Test Report: KVM_Linux_crio 19478

                    
cdbac7a92b6ef0941d2ffc9877dc4d64cf2ec5e1:2024-08-19:35858

Failed tests (29/318)

Order   Failed test   Duration (s)
34 TestAddons/parallel/Ingress 154.89
36 TestAddons/parallel/MetricsServer 357.91
45 TestAddons/StoppedEnableDisable 154.27
164 TestMultiControlPlane/serial/StopSecondaryNode 141.66
166 TestMultiControlPlane/serial/RestartSecondaryNode 62.09
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 297.85
171 TestMultiControlPlane/serial/StopCluster 141.75
231 TestMultiNode/serial/RestartKeepsNodes 325.18
233 TestMultiNode/serial/StopMultiNode 141.12
240 TestPreload 272.23
248 TestKubernetesUpgrade 379.61
291 TestStartStop/group/old-k8s-version/serial/FirstStart 272.71
298 TestStartStop/group/no-preload/serial/Stop 139
301 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.1
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
309 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 106.34
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
317 TestStartStop/group/old-k8s-version/serial/SecondStart 703.91
326 TestStartStop/group/embed-certs/serial/Stop 139.17
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
329 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.19
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.32
331 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.64
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 430.2
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.18
334 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 323.87
335 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 174.1
359 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 387.32
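
Each row above is a single Go test or subtest from the minikube integration suite. As a rough guide to reproducing a failure outside CI, the sketch below re-runs one failed test against the same driver and container runtime used by this job. It assumes a minikube source checkout with the integration tests under test/integration, a binary already built at out/minikube-linux-amd64 (the path used throughout the logs in this report), and a -minikube-start-args flag exposed by the suite; the flag name is an assumption, so verify it against test/integration/main_test.go before relying on it.

    # Hypothetical local re-run of a single failed test from this matrix.
    # The KVM driver and CRI-O runtime arguments are copied from the start
    # command recorded in the logs below.
    go test ./test/integration -v -timeout 90m \
        -run 'TestAddons/parallel/Ingress' \
        -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'
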
TestAddons/parallel/Ingress (154.89s)
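
The decisive step in the log below is the in-VM request: the ssh'd curl (curl -s http://127.0.0.1/ -H 'Host: nginx.example.com') exits with status 28, which is curl's "operation timed out" code, meaning the ingress controller never answered within the roughly two-minute window (2m12s) seen in the log. A hedged, manual version of the same check against a still-running profile might look like the following; the profile name comes from the log, and no Kubernetes resource names are assumed.

    # Sketch only: inspect the ingress controller, then repeat the failing
    # request by hand with verbose output and an explicit timeout.
    kubectl --context addons-825243 -n ingress-nginx get pods -o wide
    kubectl --context addons-825243 get ingress,svc,pods -n default
    out/minikube-linux-amd64 -p addons-825243 ssh \
        "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"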

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-825243 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-825243 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-825243 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [150e23cd-36ab-477d-80fd-445d04acef1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [150e23cd-36ab-477d-80fd-445d04acef1c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004560745s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-825243 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.033148018s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-825243 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.129
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 addons disable ingress-dns --alsologtostderr -v=1: (1.397308755s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 addons disable ingress --alsologtostderr -v=1: (7.657043938s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-825243 -n addons-825243
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 logs -n 25: (1.109428301s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-258496                                                                     | download-only-258496 | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC | 19 Aug 24 16:53 UTC |
	| delete  | -p download-only-444293                                                                     | download-only-444293 | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC | 19 Aug 24 16:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-174718 | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC |                     |
	|         | binary-mirror-174718                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38627                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-174718                                                                     | binary-mirror-174718 | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC | 19 Aug 24 16:53 UTC |
	| addons  | disable dashboard -p                                                                        | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC |                     |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC |                     |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-825243 --wait=true                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC | 19 Aug 24 16:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:55 UTC | 19 Aug 24 16:55 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:55 UTC | 19 Aug 24 16:55 UTC |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:55 UTC | 19 Aug 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-825243 ssh cat                                                                       | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | /opt/local-path-provisioner/pvc-63640194-31bc-4782-b58f-2706becef52c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | -p addons-825243                                                                            |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | -p addons-825243                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-825243 ip                                                                            | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-825243 ssh curl -s                                                                   | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-825243 addons                                                                        | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-825243 addons                                                                        | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-825243 ip                                                                            | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:58 UTC | 19 Aug 24 16:58 UTC |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:58 UTC | 19 Aug 24 16:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:58 UTC | 19 Aug 24 16:58 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 16:53:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 16:53:08.536296   18587 out.go:345] Setting OutFile to fd 1 ...
	I0819 16:53:08.536789   18587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:53:08.536838   18587 out.go:358] Setting ErrFile to fd 2...
	I0819 16:53:08.536856   18587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:53:08.537294   18587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 16:53:08.538286   18587 out.go:352] Setting JSON to false
	I0819 16:53:08.539076   18587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2134,"bootTime":1724084255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 16:53:08.539132   18587 start.go:139] virtualization: kvm guest
	I0819 16:53:08.541156   18587 out.go:177] * [addons-825243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 16:53:08.542654   18587 notify.go:220] Checking for updates...
	I0819 16:53:08.542667   18587 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 16:53:08.544423   18587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 16:53:08.545926   18587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 16:53:08.547474   18587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:53:08.548867   18587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 16:53:08.550252   18587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 16:53:08.551826   18587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 16:53:08.583528   18587 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 16:53:08.584876   18587 start.go:297] selected driver: kvm2
	I0819 16:53:08.584890   18587 start.go:901] validating driver "kvm2" against <nil>
	I0819 16:53:08.584901   18587 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 16:53:08.585621   18587 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:53:08.585692   18587 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 16:53:08.600403   18587 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 16:53:08.600460   18587 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 16:53:08.600683   18587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 16:53:08.600745   18587 cni.go:84] Creating CNI manager for ""
	I0819 16:53:08.600782   18587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:53:08.600797   18587 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 16:53:08.600856   18587 start.go:340] cluster config:
	{Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 16:53:08.600954   18587 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:53:08.602805   18587 out.go:177] * Starting "addons-825243" primary control-plane node in "addons-825243" cluster
	I0819 16:53:08.604274   18587 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 16:53:08.604319   18587 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 16:53:08.604340   18587 cache.go:56] Caching tarball of preloaded images
	I0819 16:53:08.604433   18587 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 16:53:08.604448   18587 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 16:53:08.604737   18587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/config.json ...
	I0819 16:53:08.604778   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/config.json: {Name:mk03102e743c14e50e5d12b93edfed098d134cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:08.604954   18587 start.go:360] acquireMachinesLock for addons-825243: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 16:53:08.605016   18587 start.go:364] duration metric: took 44.552µs to acquireMachinesLock for "addons-825243"
	I0819 16:53:08.605043   18587 start.go:93] Provisioning new machine with config: &{Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 16:53:08.605108   18587 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 16:53:08.606990   18587 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 16:53:08.607139   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:08.607181   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:08.621417   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0819 16:53:08.621808   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:08.622285   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:08.622327   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:08.622647   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:08.622817   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:08.622946   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:08.623071   18587 start.go:159] libmachine.API.Create for "addons-825243" (driver="kvm2")
	I0819 16:53:08.623093   18587 client.go:168] LocalClient.Create starting
	I0819 16:53:08.623126   18587 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 16:53:08.673646   18587 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 16:53:08.846596   18587 main.go:141] libmachine: Running pre-create checks...
	I0819 16:53:08.846620   18587 main.go:141] libmachine: (addons-825243) Calling .PreCreateCheck
	I0819 16:53:08.847145   18587 main.go:141] libmachine: (addons-825243) Calling .GetConfigRaw
	I0819 16:53:08.847601   18587 main.go:141] libmachine: Creating machine...
	I0819 16:53:08.847615   18587 main.go:141] libmachine: (addons-825243) Calling .Create
	I0819 16:53:08.847768   18587 main.go:141] libmachine: (addons-825243) Creating KVM machine...
	I0819 16:53:08.849062   18587 main.go:141] libmachine: (addons-825243) DBG | found existing default KVM network
	I0819 16:53:08.849703   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:08.849558   18609 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0819 16:53:08.849725   18587 main.go:141] libmachine: (addons-825243) DBG | created network xml: 
	I0819 16:53:08.849738   18587 main.go:141] libmachine: (addons-825243) DBG | <network>
	I0819 16:53:08.849749   18587 main.go:141] libmachine: (addons-825243) DBG |   <name>mk-addons-825243</name>
	I0819 16:53:08.849760   18587 main.go:141] libmachine: (addons-825243) DBG |   <dns enable='no'/>
	I0819 16:53:08.849772   18587 main.go:141] libmachine: (addons-825243) DBG |   
	I0819 16:53:08.849783   18587 main.go:141] libmachine: (addons-825243) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 16:53:08.849790   18587 main.go:141] libmachine: (addons-825243) DBG |     <dhcp>
	I0819 16:53:08.849845   18587 main.go:141] libmachine: (addons-825243) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 16:53:08.849867   18587 main.go:141] libmachine: (addons-825243) DBG |     </dhcp>
	I0819 16:53:08.849875   18587 main.go:141] libmachine: (addons-825243) DBG |   </ip>
	I0819 16:53:08.849884   18587 main.go:141] libmachine: (addons-825243) DBG |   
	I0819 16:53:08.849892   18587 main.go:141] libmachine: (addons-825243) DBG | </network>
	I0819 16:53:08.849900   18587 main.go:141] libmachine: (addons-825243) DBG | 
	I0819 16:53:08.855642   18587 main.go:141] libmachine: (addons-825243) DBG | trying to create private KVM network mk-addons-825243 192.168.39.0/24...
	I0819 16:53:08.921593   18587 main.go:141] libmachine: (addons-825243) DBG | private KVM network mk-addons-825243 192.168.39.0/24 created
	I0819 16:53:08.921629   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:08.921527   18609 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:53:08.921644   18587 main.go:141] libmachine: (addons-825243) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243 ...
	I0819 16:53:08.921659   18587 main.go:141] libmachine: (addons-825243) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 16:53:08.921671   18587 main.go:141] libmachine: (addons-825243) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 16:53:09.207395   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:09.207287   18609 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa...
	I0819 16:53:09.483143   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:09.483023   18609 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/addons-825243.rawdisk...
	I0819 16:53:09.483180   18587 main.go:141] libmachine: (addons-825243) DBG | Writing magic tar header
	I0819 16:53:09.483190   18587 main.go:141] libmachine: (addons-825243) DBG | Writing SSH key tar header
	I0819 16:53:09.483198   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:09.483133   18609 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243 ...
	I0819 16:53:09.483297   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243 (perms=drwx------)
	I0819 16:53:09.483320   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243
	I0819 16:53:09.483328   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 16:53:09.483335   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 16:53:09.483342   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 16:53:09.483352   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 16:53:09.483358   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 16:53:09.483365   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 16:53:09.483369   18587 main.go:141] libmachine: (addons-825243) Creating domain...
	I0819 16:53:09.483378   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:53:09.483385   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 16:53:09.483393   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 16:53:09.483398   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins
	I0819 16:53:09.483406   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home
	I0819 16:53:09.483415   18587 main.go:141] libmachine: (addons-825243) DBG | Skipping /home - not owner
	I0819 16:53:09.484369   18587 main.go:141] libmachine: (addons-825243) define libvirt domain using xml: 
	I0819 16:53:09.484388   18587 main.go:141] libmachine: (addons-825243) <domain type='kvm'>
	I0819 16:53:09.484399   18587 main.go:141] libmachine: (addons-825243)   <name>addons-825243</name>
	I0819 16:53:09.484408   18587 main.go:141] libmachine: (addons-825243)   <memory unit='MiB'>4000</memory>
	I0819 16:53:09.484416   18587 main.go:141] libmachine: (addons-825243)   <vcpu>2</vcpu>
	I0819 16:53:09.484429   18587 main.go:141] libmachine: (addons-825243)   <features>
	I0819 16:53:09.484441   18587 main.go:141] libmachine: (addons-825243)     <acpi/>
	I0819 16:53:09.484447   18587 main.go:141] libmachine: (addons-825243)     <apic/>
	I0819 16:53:09.484457   18587 main.go:141] libmachine: (addons-825243)     <pae/>
	I0819 16:53:09.484461   18587 main.go:141] libmachine: (addons-825243)     
	I0819 16:53:09.484487   18587 main.go:141] libmachine: (addons-825243)   </features>
	I0819 16:53:09.484515   18587 main.go:141] libmachine: (addons-825243)   <cpu mode='host-passthrough'>
	I0819 16:53:09.484523   18587 main.go:141] libmachine: (addons-825243)   
	I0819 16:53:09.484540   18587 main.go:141] libmachine: (addons-825243)   </cpu>
	I0819 16:53:09.484546   18587 main.go:141] libmachine: (addons-825243)   <os>
	I0819 16:53:09.484550   18587 main.go:141] libmachine: (addons-825243)     <type>hvm</type>
	I0819 16:53:09.484555   18587 main.go:141] libmachine: (addons-825243)     <boot dev='cdrom'/>
	I0819 16:53:09.484566   18587 main.go:141] libmachine: (addons-825243)     <boot dev='hd'/>
	I0819 16:53:09.484576   18587 main.go:141] libmachine: (addons-825243)     <bootmenu enable='no'/>
	I0819 16:53:09.484584   18587 main.go:141] libmachine: (addons-825243)   </os>
	I0819 16:53:09.484591   18587 main.go:141] libmachine: (addons-825243)   <devices>
	I0819 16:53:09.484599   18587 main.go:141] libmachine: (addons-825243)     <disk type='file' device='cdrom'>
	I0819 16:53:09.484607   18587 main.go:141] libmachine: (addons-825243)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/boot2docker.iso'/>
	I0819 16:53:09.484613   18587 main.go:141] libmachine: (addons-825243)       <target dev='hdc' bus='scsi'/>
	I0819 16:53:09.484620   18587 main.go:141] libmachine: (addons-825243)       <readonly/>
	I0819 16:53:09.484624   18587 main.go:141] libmachine: (addons-825243)     </disk>
	I0819 16:53:09.484630   18587 main.go:141] libmachine: (addons-825243)     <disk type='file' device='disk'>
	I0819 16:53:09.484640   18587 main.go:141] libmachine: (addons-825243)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 16:53:09.484649   18587 main.go:141] libmachine: (addons-825243)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/addons-825243.rawdisk'/>
	I0819 16:53:09.484659   18587 main.go:141] libmachine: (addons-825243)       <target dev='hda' bus='virtio'/>
	I0819 16:53:09.484664   18587 main.go:141] libmachine: (addons-825243)     </disk>
	I0819 16:53:09.484669   18587 main.go:141] libmachine: (addons-825243)     <interface type='network'>
	I0819 16:53:09.484677   18587 main.go:141] libmachine: (addons-825243)       <source network='mk-addons-825243'/>
	I0819 16:53:09.484681   18587 main.go:141] libmachine: (addons-825243)       <model type='virtio'/>
	I0819 16:53:09.484686   18587 main.go:141] libmachine: (addons-825243)     </interface>
	I0819 16:53:09.484693   18587 main.go:141] libmachine: (addons-825243)     <interface type='network'>
	I0819 16:53:09.484699   18587 main.go:141] libmachine: (addons-825243)       <source network='default'/>
	I0819 16:53:09.484706   18587 main.go:141] libmachine: (addons-825243)       <model type='virtio'/>
	I0819 16:53:09.484711   18587 main.go:141] libmachine: (addons-825243)     </interface>
	I0819 16:53:09.484718   18587 main.go:141] libmachine: (addons-825243)     <serial type='pty'>
	I0819 16:53:09.484731   18587 main.go:141] libmachine: (addons-825243)       <target port='0'/>
	I0819 16:53:09.484742   18587 main.go:141] libmachine: (addons-825243)     </serial>
	I0819 16:53:09.484769   18587 main.go:141] libmachine: (addons-825243)     <console type='pty'>
	I0819 16:53:09.484785   18587 main.go:141] libmachine: (addons-825243)       <target type='serial' port='0'/>
	I0819 16:53:09.484795   18587 main.go:141] libmachine: (addons-825243)     </console>
	I0819 16:53:09.484801   18587 main.go:141] libmachine: (addons-825243)     <rng model='virtio'>
	I0819 16:53:09.484821   18587 main.go:141] libmachine: (addons-825243)       <backend model='random'>/dev/random</backend>
	I0819 16:53:09.484839   18587 main.go:141] libmachine: (addons-825243)     </rng>
	I0819 16:53:09.484851   18587 main.go:141] libmachine: (addons-825243)     
	I0819 16:53:09.484861   18587 main.go:141] libmachine: (addons-825243)     
	I0819 16:53:09.484869   18587 main.go:141] libmachine: (addons-825243)   </devices>
	I0819 16:53:09.484878   18587 main.go:141] libmachine: (addons-825243) </domain>
	I0819 16:53:09.484889   18587 main.go:141] libmachine: (addons-825243) 
	I0819 16:53:09.491278   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:84:2c:54 in network default
	I0819 16:53:09.491999   18587 main.go:141] libmachine: (addons-825243) Ensuring networks are active...
	I0819 16:53:09.492018   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:09.492877   18587 main.go:141] libmachine: (addons-825243) Ensuring network default is active
	I0819 16:53:09.493203   18587 main.go:141] libmachine: (addons-825243) Ensuring network mk-addons-825243 is active
	I0819 16:53:09.494654   18587 main.go:141] libmachine: (addons-825243) Getting domain xml...
	I0819 16:53:09.495463   18587 main.go:141] libmachine: (addons-825243) Creating domain...
	I0819 16:53:11.125700   18587 main.go:141] libmachine: (addons-825243) Waiting to get IP...
	I0819 16:53:11.126626   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:11.127108   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:11.127161   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:11.127109   18609 retry.go:31] will retry after 284.983674ms: waiting for machine to come up
	I0819 16:53:11.413634   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:11.413967   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:11.413993   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:11.413910   18609 retry.go:31] will retry after 285.340726ms: waiting for machine to come up
	I0819 16:53:11.700258   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:11.700811   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:11.700836   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:11.700645   18609 retry.go:31] will retry after 472.018783ms: waiting for machine to come up
	I0819 16:53:12.173955   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:12.174450   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:12.174504   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:12.174413   18609 retry.go:31] will retry after 529.719767ms: waiting for machine to come up
	I0819 16:53:12.706375   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:12.706817   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:12.706845   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:12.706759   18609 retry.go:31] will retry after 634.102418ms: waiting for machine to come up
	I0819 16:53:13.342676   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:13.343033   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:13.343060   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:13.342986   18609 retry.go:31] will retry after 691.330212ms: waiting for machine to come up
	I0819 16:53:14.035619   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:14.035976   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:14.035999   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:14.035930   18609 retry.go:31] will retry after 876.541685ms: waiting for machine to come up
	I0819 16:53:14.913784   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:14.914194   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:14.914217   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:14.914150   18609 retry.go:31] will retry after 1.483212916s: waiting for machine to come up
	I0819 16:53:16.399732   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:16.400330   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:16.400355   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:16.400224   18609 retry.go:31] will retry after 1.267260439s: waiting for machine to come up
	I0819 16:53:17.669612   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:17.669991   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:17.670034   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:17.669944   18609 retry.go:31] will retry after 2.227693563s: waiting for machine to come up
	I0819 16:53:19.899042   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:19.899473   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:19.899505   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:19.899397   18609 retry.go:31] will retry after 2.167227329s: waiting for machine to come up
	I0819 16:53:22.069710   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:22.070126   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:22.070155   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:22.070059   18609 retry.go:31] will retry after 3.431382951s: waiting for machine to come up
	I0819 16:53:25.504118   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:25.504523   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:25.504542   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:25.504478   18609 retry.go:31] will retry after 4.43401048s: waiting for machine to come up
	I0819 16:53:29.939874   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:29.940324   18587 main.go:141] libmachine: (addons-825243) Found IP for machine: 192.168.39.129
	I0819 16:53:29.940358   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has current primary IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:29.940367   18587 main.go:141] libmachine: (addons-825243) Reserving static IP address...
	I0819 16:53:29.940787   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find host DHCP lease matching {name: "addons-825243", mac: "52:54:00:fc:11:a2", ip: "192.168.39.129"} in network mk-addons-825243
	I0819 16:53:30.012100   18587 main.go:141] libmachine: (addons-825243) DBG | Getting to WaitForSSH function...
	I0819 16:53:30.012122   18587 main.go:141] libmachine: (addons-825243) Reserved static IP address: 192.168.39.129
	I0819 16:53:30.012134   18587 main.go:141] libmachine: (addons-825243) Waiting for SSH to be available...
	I0819 16:53:30.014643   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.015032   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.015077   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.015151   18587 main.go:141] libmachine: (addons-825243) DBG | Using SSH client type: external
	I0819 16:53:30.015204   18587 main.go:141] libmachine: (addons-825243) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa (-rw-------)
	I0819 16:53:30.015260   18587 main.go:141] libmachine: (addons-825243) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 16:53:30.015274   18587 main.go:141] libmachine: (addons-825243) DBG | About to run SSH command:
	I0819 16:53:30.015284   18587 main.go:141] libmachine: (addons-825243) DBG | exit 0
	I0819 16:53:30.148598   18587 main.go:141] libmachine: (addons-825243) DBG | SSH cmd err, output: <nil>: 
	I0819 16:53:30.148907   18587 main.go:141] libmachine: (addons-825243) KVM machine creation complete!
	I0819 16:53:30.149170   18587 main.go:141] libmachine: (addons-825243) Calling .GetConfigRaw
	I0819 16:53:30.149722   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:30.149875   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:30.150020   18587 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 16:53:30.150033   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:30.151330   18587 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 16:53:30.151344   18587 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 16:53:30.151351   18587 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 16:53:30.151357   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.153512   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.153837   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.153867   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.154001   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.154154   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.154301   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.154447   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.154571   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.154773   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.154786   18587 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 16:53:30.263931   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
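The "exit 0" command above is libmachine's SSH reachability probe. As a minimal sketch (not part of the run), the same probe can be reproduced from the host using the user, IP, key path, and options already shown in this log:

    # Hedged sketch: reproduce the SSH availability check by hand.
    # User, IP, and key path are taken from the log lines above; adjust for other runs.
    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa \
        docker@192.168.39.129 'exit 0' && echo "SSH is available"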
	I0819 16:53:30.263961   18587 main.go:141] libmachine: Detecting the provisioner...
	I0819 16:53:30.263972   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.266534   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.266902   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.266943   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.267092   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.267288   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.267445   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.267568   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.267721   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.267912   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.267926   18587 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 16:53:30.377151   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 16:53:30.377213   18587 main.go:141] libmachine: found compatible host: buildroot
	I0819 16:53:30.377222   18587 main.go:141] libmachine: Provisioning with buildroot...
	I0819 16:53:30.377244   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:30.377515   18587 buildroot.go:166] provisioning hostname "addons-825243"
	I0819 16:53:30.377549   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:30.377769   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.380025   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.380306   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.380357   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.380466   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.380711   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.380900   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.381047   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.381200   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.381414   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.381432   18587 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-825243 && echo "addons-825243" | sudo tee /etc/hostname
	I0819 16:53:30.501817   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-825243
	
	I0819 16:53:30.501840   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.504705   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.505133   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.505165   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.505318   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.505568   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.505744   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.505877   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.506011   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.506177   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.506192   18587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-825243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-825243/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-825243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 16:53:30.620583   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
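A quick sanity check of the hostname provisioning above (illustrative only; minikube does not run this here). The expected values follow directly from the commands just executed:

    # Hedged sketch: confirm the hostname step took effect on the guest.
    hostname                          # expected: addons-825243
    grep addons-825243 /etc/hosts     # 127.0.1.1 mapping, added only if it was missing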
	I0819 16:53:30.620614   18587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 16:53:30.620634   18587 buildroot.go:174] setting up certificates
	I0819 16:53:30.620644   18587 provision.go:84] configureAuth start
	I0819 16:53:30.620653   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:30.620933   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:30.623515   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.623848   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.623874   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.624044   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.626076   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.626376   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.626403   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.626523   18587 provision.go:143] copyHostCerts
	I0819 16:53:30.626595   18587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 16:53:30.626776   18587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 16:53:30.626872   18587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 16:53:30.626963   18587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.addons-825243 san=[127.0.0.1 192.168.39.129 addons-825243 localhost minikube]
	I0819 16:53:30.799091   18587 provision.go:177] copyRemoteCerts
	I0819 16:53:30.799142   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 16:53:30.799163   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.801644   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.801991   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.802019   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.802197   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.802450   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.802594   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.802753   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:30.887264   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 16:53:30.909649   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 16:53:30.930958   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
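One way to confirm that the server certificate and key copied above belong together (an illustrative check, not something this run performs):

    # Hedged sketch: the public key embedded in server.pem must match server-key.pem.
    sudo openssl x509 -noout -pubkey -in /etc/docker/server.pem | sha256sum
    sudo openssl pkey -pubout -in /etc/docker/server-key.pem | sha256sum
    # The two hashes should be identical.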
	I0819 16:53:30.952658   18587 provision.go:87] duration metric: took 332.001257ms to configureAuth
	I0819 16:53:30.952688   18587 buildroot.go:189] setting minikube options for container-runtime
	I0819 16:53:30.952932   18587 config.go:182] Loaded profile config "addons-825243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 16:53:30.953077   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.955645   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.956015   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.956044   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.956304   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.956511   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.956709   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.956889   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.957023   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.957198   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.957214   18587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 16:53:31.221019   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 16:53:31.221058   18587 main.go:141] libmachine: Checking connection to Docker...
	I0819 16:53:31.221072   18587 main.go:141] libmachine: (addons-825243) Calling .GetURL
	I0819 16:53:31.222369   18587 main.go:141] libmachine: (addons-825243) DBG | Using libvirt version 6000000
	I0819 16:53:31.224344   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.224705   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.224733   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.224936   18587 main.go:141] libmachine: Docker is up and running!
	I0819 16:53:31.224952   18587 main.go:141] libmachine: Reticulating splines...
	I0819 16:53:31.224958   18587 client.go:171] duration metric: took 22.601858712s to LocalClient.Create
	I0819 16:53:31.224976   18587 start.go:167] duration metric: took 22.601906283s to libmachine.API.Create "addons-825243"
	I0819 16:53:31.224985   18587 start.go:293] postStartSetup for "addons-825243" (driver="kvm2")
	I0819 16:53:31.224994   18587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 16:53:31.225010   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.225251   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 16:53:31.225274   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.227188   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.227580   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.227608   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.227681   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.227854   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.228046   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.228195   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:31.311059   18587 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 16:53:31.315003   18587 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 16:53:31.315030   18587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 16:53:31.315108   18587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 16:53:31.315137   18587 start.go:296] duration metric: took 90.14732ms for postStartSetup
	I0819 16:53:31.315191   18587 main.go:141] libmachine: (addons-825243) Calling .GetConfigRaw
	I0819 16:53:31.315786   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:31.318474   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.318800   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.318827   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.319090   18587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/config.json ...
	I0819 16:53:31.319347   18587 start.go:128] duration metric: took 22.714227457s to createHost
	I0819 16:53:31.319376   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.321718   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.322089   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.322118   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.322231   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.322416   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.322606   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.322759   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.322967   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:31.323144   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:31.323157   18587 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 16:53:31.433180   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724086411.406651672
	
	I0819 16:53:31.433208   18587 fix.go:216] guest clock: 1724086411.406651672
	I0819 16:53:31.433219   18587 fix.go:229] Guest: 2024-08-19 16:53:31.406651672 +0000 UTC Remote: 2024-08-19 16:53:31.319362036 +0000 UTC m=+22.815660156 (delta=87.289636ms)
	I0819 16:53:31.433249   18587 fix.go:200] guest clock delta is within tolerance: 87.289636ms
	I0819 16:53:31.433259   18587 start.go:83] releasing machines lock for "addons-825243", held for 22.828227323s
	I0819 16:53:31.433293   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.433566   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:31.436318   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.436675   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.436702   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.436825   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.437298   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.437516   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.437596   18587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 16:53:31.437656   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.437718   18587 ssh_runner.go:195] Run: cat /version.json
	I0819 16:53:31.437741   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.440062   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440353   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.440391   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440410   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440489   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.440636   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.440793   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.440894   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.440915   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440943   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:31.441080   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.441278   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.441449   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.441586   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:31.574797   18587 ssh_runner.go:195] Run: systemctl --version
	I0819 16:53:31.580489   18587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 16:53:31.732117   18587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 16:53:31.737971   18587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 16:53:31.738025   18587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 16:53:31.752301   18587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 16:53:31.752321   18587 start.go:495] detecting cgroup driver to use...
	I0819 16:53:31.752377   18587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 16:53:31.768727   18587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 16:53:31.782325   18587 docker.go:217] disabling cri-docker service (if available) ...
	I0819 16:53:31.782385   18587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 16:53:31.795610   18587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 16:53:31.808951   18587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 16:53:31.914199   18587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 16:53:32.063850   18587 docker.go:233] disabling docker service ...
	I0819 16:53:32.063923   18587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 16:53:32.077510   18587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 16:53:32.089548   18587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 16:53:32.220361   18587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 16:53:32.347506   18587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
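After the disable/mask commands above, both Docker-related runtimes should be out of the picture. A hedged spot check, for illustration:

    # Hedged sketch: confirm docker and cri-dockerd units end up masked.
    systemctl is-enabled docker.service cri-docker.service 2>/dev/null
    # expected output: "masked" for both, matching the mask commands logged above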
	I0819 16:53:32.359855   18587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 16:53:32.376158   18587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 16:53:32.376221   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.385180   18587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 16:53:32.385233   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.394239   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.403073   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.411946   18587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 16:53:32.421088   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.430048   18587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.446015   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
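Taken together, the sed edits above set the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl in CRI-O's drop-in config. An illustrative way to inspect the result (expected values are inferred from the commands, not captured from this run):

    # Hedged sketch: show the settings the sed commands above are expected to produce.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",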
	I0819 16:53:32.455316   18587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 16:53:32.463762   18587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 16:53:32.463818   18587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 16:53:32.475760   18587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
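The failed sysctl above is expected before br_netfilter is loaded; the modprobe and the ip_forward write take care of both prerequisites. A hedged verification sketch:

    # Hedged sketch: verify the kernel prerequisites set up above.
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # net.ipv4.ip_forward should report 1, and the bridge sysctl now resolves
    # instead of failing with "No such file or directory".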
	I0819 16:53:32.484331   18587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 16:53:32.615165   18587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 16:53:32.744212   18587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 16:53:32.744298   18587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 16:53:32.748582   18587 start.go:563] Will wait 60s for crictl version
	I0819 16:53:32.748638   18587 ssh_runner.go:195] Run: which crictl
	I0819 16:53:32.752028   18587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 16:53:32.786462   18587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 16:53:32.786579   18587 ssh_runner.go:195] Run: crio --version
	I0819 16:53:32.813073   18587 ssh_runner.go:195] Run: crio --version
	I0819 16:53:32.841172   18587 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 16:53:32.842568   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:32.844961   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:32.845237   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:32.845261   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:32.845504   18587 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 16:53:32.849155   18587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 16:53:32.860951   18587 kubeadm.go:883] updating cluster {Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 16:53:32.861102   18587 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 16:53:32.861172   18587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 16:53:32.894853   18587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 16:53:32.894922   18587 ssh_runner.go:195] Run: which lz4
	I0819 16:53:32.898456   18587 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 16:53:32.902055   18587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 16:53:32.902077   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 16:53:34.032832   18587 crio.go:462] duration metric: took 1.134399043s to copy over tarball
	I0819 16:53:34.032892   18587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 16:53:36.098175   18587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.065253953s)
	I0819 16:53:36.098204   18587 crio.go:469] duration metric: took 2.065349568s to extract the tarball
	I0819 16:53:36.098210   18587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 16:53:36.134302   18587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 16:53:36.172698   18587 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 16:53:36.172720   18587 cache_images.go:84] Images are preloaded, skipping loading
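The second crictl listing confirms the preload tarball populated CRI-O's image store. A spot check, for illustration:

    # Hedged sketch: confirm a core control-plane image is present after the preload extract.
    sudo crictl images | grep 'registry.k8s.io/kube-apiserver'
    # expected tag for this run: v1.31.0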
	I0819 16:53:36.172728   18587 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.0 crio true true} ...
	I0819 16:53:36.172841   18587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-825243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 16:53:36.172908   18587 ssh_runner.go:195] Run: crio config
	I0819 16:53:36.216505   18587 cni.go:84] Creating CNI manager for ""
	I0819 16:53:36.216522   18587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:53:36.216533   18587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 16:53:36.216553   18587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-825243 NodeName:addons-825243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 16:53:36.216732   18587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-825243"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 16:53:36.216809   18587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 16:53:36.226099   18587 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 16:53:36.226168   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 16:53:36.234940   18587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 16:53:36.249484   18587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 16:53:36.264192   18587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 16:53:36.279091   18587 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0819 16:53:36.282440   18587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 16:53:36.293119   18587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 16:53:36.409356   18587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 16:53:36.425083   18587 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243 for IP: 192.168.39.129
	I0819 16:53:36.425107   18587 certs.go:194] generating shared ca certs ...
	I0819 16:53:36.425129   18587 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.425288   18587 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 16:53:36.554684   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt ...
	I0819 16:53:36.554712   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt: {Name:mkd8aac57f38305eebc3e70a3c299ec6319330da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.554878   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key ...
	I0819 16:53:36.554889   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key: {Name:mkb11833b68a299c4cc435820a97207697d835b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.554957   18587 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 16:53:36.734115   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt ...
	I0819 16:53:36.734143   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt: {Name:mk66fe69cc91ada8d79a785e88eb420be90ed98f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.734286   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key ...
	I0819 16:53:36.734298   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key: {Name:mk5aea4d87875f2ef5a82db7cdaada987d64c4ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.734362   18587 certs.go:256] generating profile certs ...
	I0819 16:53:36.734411   18587 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.key
	I0819 16:53:36.734431   18587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt with IP's: []
	I0819 16:53:36.783711   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt ...
	I0819 16:53:36.783735   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: {Name:mkf1e36c1ca10fb8a2556accec6a5bea26a80421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.783870   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.key ...
	I0819 16:53:36.783880   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.key: {Name:mkc7eda253cff4b6cd49b3cea00744ca86cf5a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.783940   18587 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf
	I0819 16:53:36.783957   18587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129]
	I0819 16:53:36.957886   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf ...
	I0819 16:53:36.957917   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf: {Name:mk458cd92693e214fb34fbded3481267662e7b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.958074   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf ...
	I0819 16:53:36.958086   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf: {Name:mkbc354f736341260a433d039e888aaf67f14dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.958153   18587 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt
	I0819 16:53:36.958237   18587 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key
	I0819 16:53:36.958285   18587 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key
	I0819 16:53:36.958316   18587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt with IP's: []
	I0819 16:53:37.233250   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt ...
	I0819 16:53:37.233279   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt: {Name:mk8cf6ef0fb7e7386eac5532fa835bd2720bd30e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:37.233471   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key ...
	I0819 16:53:37.233489   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key: {Name:mkeaa40640a707f170ff9c5f21c5f43bdb8d2e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:37.233703   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 16:53:37.233746   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 16:53:37.233781   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 16:53:37.233811   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 16:53:37.234358   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 16:53:37.259542   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 16:53:37.296937   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 16:53:37.325078   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 16:53:37.345832   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 16:53:37.371630   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 16:53:37.392630   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 16:53:37.414151   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 16:53:37.435517   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 16:53:37.456119   18587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 16:53:37.470525   18587 ssh_runner.go:195] Run: openssl version
	I0819 16:53:37.475661   18587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 16:53:37.485120   18587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 16:53:37.489127   18587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 16:53:37.489176   18587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 16:53:37.494503   18587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
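The symlink name b5213941.0 used above is the OpenSSL subject hash of the minikube CA plus a ".0" suffix, which is how the system trust directory indexes certificates. A hedged sketch of the same step:

    # Hedged sketch: derive the hash-based symlink name for the CA trust store.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"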
	I0819 16:53:37.503908   18587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 16:53:37.507557   18587 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 16:53:37.507605   18587 kubeadm.go:392] StartCluster: {Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 16:53:37.507672   18587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 16:53:37.507707   18587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 16:53:37.541362   18587 cri.go:89] found id: ""
	I0819 16:53:37.541421   18587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 16:53:37.550424   18587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 16:53:37.559168   18587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 16:53:37.567609   18587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 16:53:37.567626   18587 kubeadm.go:157] found existing configuration files:
	
	I0819 16:53:37.567672   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 16:53:37.575702   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 16:53:37.575751   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 16:53:37.584198   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 16:53:37.592464   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 16:53:37.592517   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 16:53:37.600974   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 16:53:37.609113   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 16:53:37.609173   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 16:53:37.617676   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 16:53:37.625742   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 16:53:37.625802   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 16:53:37.634307   18587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 16:53:37.687391   18587 kubeadm.go:310] W0819 16:53:37.668313     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 16:53:37.688033   18587 kubeadm.go:310] W0819 16:53:37.669257     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 16:53:37.785723   18587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 16:53:48.015049   18587 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 16:53:48.015099   18587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 16:53:48.015219   18587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 16:53:48.015392   18587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 16:53:48.015514   18587 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 16:53:48.015602   18587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 16:53:48.017055   18587 out.go:235]   - Generating certificates and keys ...
	I0819 16:53:48.017148   18587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 16:53:48.017229   18587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 16:53:48.017359   18587 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 16:53:48.017438   18587 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 16:53:48.017538   18587 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 16:53:48.017629   18587 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 16:53:48.017709   18587 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 16:53:48.017870   18587 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-825243 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0819 16:53:48.017947   18587 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 16:53:48.018106   18587 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-825243 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0819 16:53:48.018171   18587 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 16:53:48.018224   18587 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 16:53:48.018264   18587 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 16:53:48.018313   18587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 16:53:48.018366   18587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 16:53:48.018417   18587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 16:53:48.018461   18587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 16:53:48.018521   18587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 16:53:48.018594   18587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 16:53:48.018668   18587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 16:53:48.018750   18587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 16:53:48.020017   18587 out.go:235]   - Booting up control plane ...
	I0819 16:53:48.020124   18587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 16:53:48.020197   18587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 16:53:48.020253   18587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 16:53:48.020347   18587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 16:53:48.020462   18587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 16:53:48.020500   18587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 16:53:48.020596   18587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 16:53:48.020704   18587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 16:53:48.020780   18587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.123782ms
	I0819 16:53:48.020861   18587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 16:53:48.020958   18587 kubeadm.go:310] [api-check] The API server is healthy after 5.00138442s
	I0819 16:53:48.021067   18587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 16:53:48.021175   18587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 16:53:48.021246   18587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 16:53:48.021427   18587 kubeadm.go:310] [mark-control-plane] Marking the node addons-825243 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 16:53:48.021480   18587 kubeadm.go:310] [bootstrap-token] Using token: lfkoml.a5tqy6xdm24vx0tr
	I0819 16:53:48.022860   18587 out.go:235]   - Configuring RBAC rules ...
	I0819 16:53:48.022972   18587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 16:53:48.023076   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 16:53:48.023210   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 16:53:48.023328   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 16:53:48.023442   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 16:53:48.023517   18587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 16:53:48.023612   18587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 16:53:48.023657   18587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 16:53:48.023701   18587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 16:53:48.023709   18587 kubeadm.go:310] 
	I0819 16:53:48.023757   18587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 16:53:48.023763   18587 kubeadm.go:310] 
	I0819 16:53:48.023847   18587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 16:53:48.023856   18587 kubeadm.go:310] 
	I0819 16:53:48.023901   18587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 16:53:48.023955   18587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 16:53:48.024004   18587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 16:53:48.024010   18587 kubeadm.go:310] 
	I0819 16:53:48.024053   18587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 16:53:48.024059   18587 kubeadm.go:310] 
	I0819 16:53:48.024096   18587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 16:53:48.024102   18587 kubeadm.go:310] 
	I0819 16:53:48.024147   18587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 16:53:48.024209   18587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 16:53:48.024279   18587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 16:53:48.024290   18587 kubeadm.go:310] 
	I0819 16:53:48.024374   18587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 16:53:48.024460   18587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 16:53:48.024474   18587 kubeadm.go:310] 
	I0819 16:53:48.024593   18587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lfkoml.a5tqy6xdm24vx0tr \
	I0819 16:53:48.024741   18587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 16:53:48.024786   18587 kubeadm.go:310] 	--control-plane 
	I0819 16:53:48.024798   18587 kubeadm.go:310] 
	I0819 16:53:48.024882   18587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 16:53:48.024890   18587 kubeadm.go:310] 
	I0819 16:53:48.024961   18587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lfkoml.a5tqy6xdm24vx0tr \
	I0819 16:53:48.025061   18587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 16:53:48.025070   18587 cni.go:84] Creating CNI manager for ""
	I0819 16:53:48.025077   18587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:53:48.026496   18587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 16:53:48.027525   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 16:53:48.037497   18587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 16:53:48.056679   18587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 16:53:48.056784   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-825243 minikube.k8s.io/updated_at=2024_08_19T16_53_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=addons-825243 minikube.k8s.io/primary=true
	I0819 16:53:48.056787   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:48.163438   18587 ops.go:34] apiserver oom_adj: -16
	I0819 16:53:48.163488   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:48.664052   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:49.163942   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:49.664293   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:50.164584   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:50.664515   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:51.163769   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:51.663609   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:52.163943   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:52.246774   18587 kubeadm.go:1113] duration metric: took 4.190075838s to wait for elevateKubeSystemPrivileges
	I0819 16:53:52.246811   18587 kubeadm.go:394] duration metric: took 14.739210377s to StartCluster
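	The Run lines above (labelling the node, creating the minikube-rbac ClusterRoleBinding, then repeatedly calling "kubectl get sa default" until it succeeds) correspond to the elevateKubeSystemPrivileges step whose duration is reported at kubeadm.go:1113: grant cluster-admin to the kube-system default ServiceAccount and wait for the controller manager to create the default ServiceAccount. The following is a minimal client-go sketch of that idea, not minikube's own implementation; the package name, function name, polling interval, and error handling are illustrative assumptions.

	package bootstrap

	import (
		"context"
		"fmt"
		"time"

		rbacv1 "k8s.io/api/rbac/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// elevateKubeSystemPrivileges (hypothetical helper, not minikube's code) mirrors
	// the two kubectl invocations seen in the log above.
	func elevateKubeSystemPrivileges(kubeconfig string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		ctx := context.TODO()

		// Equivalent of "kubectl create clusterrolebinding minikube-rbac
		// --clusterrole=cluster-admin --serviceaccount=kube-system:default".
		crb := &rbacv1.ClusterRoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "ClusterRole",
				Name:     "cluster-admin",
			},
			Subjects: []rbacv1.Subject{{
				Kind:      "ServiceAccount",
				Name:      "default",
				Namespace: "kube-system",
			}},
		}
		if _, err := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
			return err
		}

		// Equivalent of the repeated "kubectl get sa default" calls: poll until the
		// default ServiceAccount exists in the default namespace.
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for the default service account")
	}
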
	I0819 16:53:52.246834   18587 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:52.246971   18587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 16:53:52.247390   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:52.247561   18587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 16:53:52.247582   18587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 16:53:52.247656   18587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 16:53:52.247757   18587 addons.go:69] Setting yakd=true in profile "addons-825243"
	I0819 16:53:52.247773   18587 addons.go:69] Setting ingress=true in profile "addons-825243"
	I0819 16:53:52.247791   18587 addons.go:234] Setting addon yakd=true in "addons-825243"
	I0819 16:53:52.247802   18587 addons.go:234] Setting addon ingress=true in "addons-825243"
	I0819 16:53:52.247800   18587 config.go:182] Loaded profile config "addons-825243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 16:53:52.247795   18587 addons.go:69] Setting registry=true in profile "addons-825243"
	I0819 16:53:52.247815   18587 addons.go:69] Setting ingress-dns=true in profile "addons-825243"
	I0819 16:53:52.247823   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247833   18587 addons.go:234] Setting addon registry=true in "addons-825243"
	I0819 16:53:52.247836   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247798   18587 addons.go:69] Setting inspektor-gadget=true in profile "addons-825243"
	I0819 16:53:52.247850   18587 addons.go:234] Setting addon ingress-dns=true in "addons-825243"
	I0819 16:53:52.247863   18587 addons.go:234] Setting addon inspektor-gadget=true in "addons-825243"
	I0819 16:53:52.247872   18587 addons.go:69] Setting metrics-server=true in profile "addons-825243"
	I0819 16:53:52.247883   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247890   18587 addons.go:234] Setting addon metrics-server=true in "addons-825243"
	I0819 16:53:52.247895   18587 addons.go:69] Setting default-storageclass=true in profile "addons-825243"
	I0819 16:53:52.247901   18587 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-825243"
	I0819 16:53:52.247900   18587 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-825243"
	I0819 16:53:52.247918   18587 addons.go:69] Setting gcp-auth=true in profile "addons-825243"
	I0819 16:53:52.247923   18587 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-825243"
	I0819 16:53:52.247924   18587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-825243"
	I0819 16:53:52.247928   18587 addons.go:69] Setting volumesnapshots=true in profile "addons-825243"
	I0819 16:53:52.247928   18587 addons.go:69] Setting storage-provisioner=true in profile "addons-825243"
	I0819 16:53:52.247935   18587 mustload.go:65] Loading cluster: addons-825243
	I0819 16:53:52.247946   18587 addons.go:234] Setting addon volumesnapshots=true in "addons-825243"
	I0819 16:53:52.247947   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247948   18587 addons.go:234] Setting addon storage-provisioner=true in "addons-825243"
	I0819 16:53:52.247963   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247967   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247982   18587 addons.go:69] Setting helm-tiller=true in profile "addons-825243"
	I0819 16:53:52.248000   18587 addons.go:234] Setting addon helm-tiller=true in "addons-825243"
	I0819 16:53:52.248014   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248098   18587 config.go:182] Loaded profile config "addons-825243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 16:53:52.248307   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248319   18587 addons.go:69] Setting cloud-spanner=true in profile "addons-825243"
	I0819 16:53:52.248324   18587 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-825243"
	I0819 16:53:52.248344   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248349   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248362   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248362   18587 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-825243"
	I0819 16:53:52.248378   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248380   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248388   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247907   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248417   18587 addons.go:234] Setting addon cloud-spanner=true in "addons-825243"
	I0819 16:53:52.247919   18587 addons.go:69] Setting volcano=true in profile "addons-825243"
	I0819 16:53:52.248449   18587 addons.go:234] Setting addon volcano=true in "addons-825243"
	I0819 16:53:52.248475   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248308   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248503   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248688   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248729   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.247919   18587 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-825243"
	I0819 16:53:52.248738   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248764   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.247864   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247890   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248307   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249034   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248311   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248829   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248731   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249107   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249113   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249079   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249179   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249199   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248475   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248450   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249266   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249427   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249470   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249449   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249492   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249503   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249515   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.257405   18587 out.go:177] * Verifying Kubernetes components...
	I0819 16:53:52.259192   18587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 16:53:52.269000   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41427
	I0819 16:53:52.269027   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0819 16:53:52.269009   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0819 16:53:52.269414   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.269509   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0819 16:53:52.269770   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.269956   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.270049   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.270064   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.270089   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.270357   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.270375   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.270434   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.270547   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.270563   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.271007   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.271041   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.271140   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.271336   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.271351   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.271400   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.289700   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.289737   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.290390   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
	I0819 16:53:52.290573   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0819 16:53:52.290617   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.290647   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.290686   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0819 16:53:52.290794   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.291204   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.291237   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.291735   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.291766   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.297535   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.297631   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.297673   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.298235   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.298252   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.298366   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.298375   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.298487   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.298497   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.298676   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.298839   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.299243   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.299265   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.304874   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.305335   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.305369   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.308505   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.308550   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.321276   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0819 16:53:52.322469   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.323142   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.323161   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.323560   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.323764   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.326948   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0819 16:53:52.327130   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I0819 16:53:52.327552   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.328079   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.328095   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.328431   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.328556   18587 addons.go:234] Setting addon default-storageclass=true in "addons-825243"
	I0819 16:53:52.328614   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.329017   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.329058   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.329063   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.329091   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.329738   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I0819 16:53:52.330170   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.330689   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.330706   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.331045   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.331577   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.331616   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.331811   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0819 16:53:52.332219   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.332687   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.332706   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.333361   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.333435   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.334025   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.334061   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.334949   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0819 16:53:52.335486   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.335961   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.335978   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.336291   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.336441   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.337282   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.337309   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.337767   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.338337   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.338370   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.338540   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.338748   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:52.338769   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:52.340443   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:53:52.340470   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0819 16:53:52.340493   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:52.340513   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:53:52.340524   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:52.340533   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:52.340715   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:52.340742   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 16:53:52.340845   18587 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 16:53:52.341354   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.342029   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.342047   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.342539   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.343110   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.343137   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.348422   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0819 16:53:52.349007   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.349552   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.349570   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.349970   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.350204   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.351029   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0819 16:53:52.351183   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0819 16:53:52.351592   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.352036   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.352094   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.352110   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.352495   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.352581   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.352600   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.352700   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.353307   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.353492   18587 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-825243"
	I0819 16:53:52.353531   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.353538   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.353910   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.353966   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.355421   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.356084   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.357877   18587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 16:53:52.357877   18587 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 16:53:52.359254   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 16:53:52.359270   18587 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 16:53:52.359291   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.359442   18587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 16:53:52.359454   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 16:53:52.359468   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.362785   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
	I0819 16:53:52.363091   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.363434   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.363636   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.363673   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.363994   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.364012   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.364077   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.364336   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.364350   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.364376   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.364548   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.364680   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.364700   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.364728   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.364938   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.364983   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.365134   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.365279   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.365397   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
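	The sshutil.go:53 lines above show per-addon SSH clients being opened to the node at 192.168.39.129:22 with the machine's id_rsa key and the docker user; each addon installer then scps its manifests over that connection. Below is a minimal sketch of building such a client with golang.org/x/crypto/ssh; it is not minikube's sshutil implementation, and the package name, helper name, and host-key handling are assumptions.

	package sshclient

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// newSSHClient (hypothetical helper) dials an SSH connection like the ones
	// logged above, authenticating with a private key file.
	func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User: user,
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Host-key verification is skipped in this sketch; minikube may pin or
			// verify the VM's host key differently.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		// e.g. newSSHClient("192.168.39.129", 22, "<minikube home>/machines/addons-825243/id_rsa", "docker")
		return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
	}
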
	I0819 16:53:52.367147   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.369386   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0819 16:53:52.369676   18587 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 16:53:52.370994   18587 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 16:53:52.371011   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 16:53:52.371028   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.371075   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0819 16:53:52.371702   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
	I0819 16:53:52.372027   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.372115   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.372906   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.372924   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.373358   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.373432   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0819 16:53:52.373588   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.373897   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.374058   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.374075   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.374126   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.374469   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.374520   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.374530   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.374534   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.374545   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.374631   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.374683   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.374857   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.374903   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.375332   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.375370   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.375714   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.375878   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.375979   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.376868   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I0819 16:53:52.377107   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.377129   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.377190   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.377747   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.377771   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.377850   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.378592   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.378720   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0819 16:53:52.379001   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.379076   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.379541   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.379581   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.379629   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 16:53:52.379863   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.379989   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.380029   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.380845   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.380871   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
	I0819 16:53:52.380879   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.381282   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.381530   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.381848   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.381995   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.382006   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.382187   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 16:53:52.382540   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0819 16:53:52.382941   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.383388   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.383362   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.383434   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.383482   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.383519   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.383781   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.383789   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.383942   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.384610   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 16:53:52.384707   18587 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 16:53:52.385387   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.385661   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.386016   18587 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 16:53:52.386034   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 16:53:52.386053   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.386319   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.386322   18587 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 16:53:52.386379   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 16:53:52.386395   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.387521   18587 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 16:53:52.388349   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 16:53:52.389202   18587 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 16:53:52.389366   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.389367   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
	I0819 16:53:52.389887   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I0819 16:53:52.389917   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.389984   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.390004   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.390094   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.390206   18587 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 16:53:52.390218   18587 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 16:53:52.390235   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.390262   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.390303   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.390336   18587 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 16:53:52.390468   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.390602   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.390613   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.390732   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.390742   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.390936   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.391560   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 16:53:52.391573   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.391678   18587 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 16:53:52.391687   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 16:53:52.391701   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.391736   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.392969   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.393035   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.393061   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.393078   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.393175   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.393352   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.393561   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.393609   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.393813   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.394111   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.394153   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.394666   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.394738   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.394864   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.395009   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.395101   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.395184   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 16:53:52.395211   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.395888   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 16:53:52.396275   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.397175   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 16:53:52.397297   18587 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 16:53:52.397316   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.397983   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.398495   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.398515   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.398657   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0819 16:53:52.398747   18587 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 16:53:52.398811   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.398877   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 16:53:52.399702   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.399707   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.399795   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0819 16:53:52.399912   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 16:53:52.399929   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.399933   18587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 16:53:52.399949   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.400051   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.400413   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.400432   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.400504   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.401182   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.401201   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.401252   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.401302   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 16:53:52.401893   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.402098   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.402155   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.402399   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.402417   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.402807   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.402838   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.402987   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0819 16:53:52.403172   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.403332   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.403454   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.403490   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 16:53:52.403500   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.403564   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.403706   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.403963   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.403977   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.404066   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.404362   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.404518   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.404584   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.404607   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.404878   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.405053   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.405212   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.405349   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.405912   18587 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 16:53:52.405962   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 16:53:52.405985   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.406198   18587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 16:53:52.406213   18587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 16:53:52.406228   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.407572   18587 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 16:53:52.407587   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 16:53:52.407604   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.409008   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 16:53:52.409677   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.410130   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 16:53:52.410149   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 16:53:52.410175   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.410208   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.410180   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.410893   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.411119   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.411384   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.411527   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.411833   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
	I0819 16:53:52.412176   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.412364   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.412854   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.412872   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.413049   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.413246   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.413379   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.413490   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.413829   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.413842   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.414149   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.414154   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.414304   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.414552   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.414570   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.414724   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.414876   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.415031   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.415142   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.415941   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.416879   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0819 16:53:52.417217   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.417594   18587 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 16:53:52.417653   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.417667   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.417970   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.418138   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.418807   18587 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 16:53:52.418823   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 16:53:52.418838   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.421739   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.422108   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.422136   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.422316   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.422471   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.422625   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.422751   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.425817   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I0819 16:53:52.426091   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.426598   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.426617   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.427019   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.427198   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.428619   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.430161   18587 out.go:177]   - Using image docker.io/busybox:stable
	I0819 16:53:52.431431   18587 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 16:53:52.432522   18587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 16:53:52.432534   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 16:53:52.432545   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.435523   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.435891   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.435916   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.436148   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.436329   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.436485   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.436620   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
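	The repeated `sshutil.go:53] new ssh client` lines above show each addon installer opening its own SSH connection to the node at 192.168.39.129:22 with the machine's id_rsa key and the `docker` user; the `ssh_runner.go:195] Run:` lines that follow execute commands over those connections. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; it is illustrative only, not minikube's sshutil/ssh_runner code, and the address, key path, username and command are copied from the surrounding log lines.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address as reported by sshutil.go in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.39.129:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Equivalent of one ssh_runner.go "Run:" line.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl start kubelet")
	if err != nil {
		log.Fatalf("command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```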
	I0819 16:53:52.723069   18587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 16:53:52.723397   18587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 16:53:52.754991   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 16:53:52.757228   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 16:53:52.778403   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 16:53:52.778427   18587 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 16:53:52.794445   18587 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 16:53:52.794465   18587 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 16:53:52.839148   18587 node_ready.go:35] waiting up to 6m0s for node "addons-825243" to be "Ready" ...
	I0819 16:53:52.842565   18587 node_ready.go:49] node "addons-825243" has status "Ready":"True"
	I0819 16:53:52.842599   18587 node_ready.go:38] duration metric: took 3.407154ms for node "addons-825243" to be "Ready" ...
	I0819 16:53:52.842611   18587 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 16:53:52.842819   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 16:53:52.850460   18587 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace to be "Ready" ...
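	node_ready.go and pod_ready.go above poll the API server until the node and each system-critical pod report a Ready condition of "True". A minimal client-go sketch of the pod-side check is shown below; it is an illustration of the pattern, not minikube's pod_ready.go. The pod name, namespace and 6m budget come from the log, and the kubeconfig path is the one the apply commands above use.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-g248k", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}
```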
	I0819 16:53:52.878249   18587 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 16:53:52.878273   18587 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 16:53:52.909030   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 16:53:52.910812   18587 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 16:53:52.910824   18587 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 16:53:52.918852   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 16:53:52.929898   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 16:53:52.929924   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 16:53:52.944770   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 16:53:52.944794   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 16:53:52.969924   18587 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 16:53:52.969944   18587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 16:53:52.978183   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 16:53:52.978209   18587 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 16:53:52.995924   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 16:53:52.997191   18587 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 16:53:52.997214   18587 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 16:53:53.006416   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 16:53:53.095905   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 16:53:53.095930   18587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 16:53:53.139898   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 16:53:53.139920   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 16:53:53.142407   18587 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 16:53:53.142429   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 16:53:53.161616   18587 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 16:53:53.161645   18587 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 16:53:53.194960   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 16:53:53.194990   18587 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 16:53:53.212286   18587 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 16:53:53.212316   18587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 16:53:53.235584   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 16:53:53.292624   18587 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 16:53:53.292664   18587 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 16:53:53.307662   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 16:53:53.307689   18587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 16:53:53.323029   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 16:53:53.323057   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 16:53:53.359847   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 16:53:53.394497   18587 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 16:53:53.394530   18587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 16:53:53.418255   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 16:53:53.418285   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 16:53:53.515990   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 16:53:53.520691   18587 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 16:53:53.520714   18587 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 16:53:53.577771   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 16:53:53.586591   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 16:53:53.586618   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 16:53:53.702106   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 16:53:53.702129   18587 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 16:53:53.779788   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 16:53:53.779814   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 16:53:53.806029   18587 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 16:53:53.806051   18587 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 16:53:53.871317   18587 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 16:53:53.871337   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 16:53:53.966201   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 16:53:54.004220   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 16:53:54.004242   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 16:53:54.078714   18587 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 16:53:54.078736   18587 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 16:53:54.377143   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 16:53:54.377165   18587 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 16:53:54.401692   18587 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 16:53:54.401721   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 16:53:54.705088   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 16:53:54.705109   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 16:53:54.745336   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 16:53:54.856481   18587 pod_ready.go:103] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"False"
	I0819 16:53:54.952417   18587 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.228986555s)
	I0819 16:53:54.952456   18587 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
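	start.go reports the host record injected once the kubectl/sed pipeline started at 16:53:52 completes: it rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway (192.168.39.1) inside the cluster. The same edit can be expressed with client-go instead of sed, roughly as below; this is a sketch grounded in the sed expression shown in the log, not the code minikube actually runs, and it assumes a stock Corefile containing a `forward . /etc/resolv.conf` line.

```go
package main

import (
	"context"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// hosts{} block equivalent to the one the sed expression above inserts before the forward plugin.
	hostsBlock := []string{
		"        hosts {",
		"           192.168.39.1 host.minikube.internal",
		"           fallthrough",
		"        }",
	}

	var out []string
	inserted := false
	for _, line := range strings.Split(cm.Data["Corefile"], "\n") {
		if !inserted && strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock...)
			inserted = true
		}
		out = append(out, line)
	}
	if inserted && !strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		cm.Data["Corefile"] = strings.Join(out, "\n")
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
}
```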
	I0819 16:53:55.051911   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 16:53:55.051942   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 16:53:55.403064   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 16:53:55.403097   18587 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 16:53:55.463407   18587 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-825243" context rescaled to 1 replicas
	I0819 16:53:55.824922   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 16:53:56.963950   18587 pod_ready.go:103] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"False"
	I0819 16:53:57.099976   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.344955977s)
	I0819 16:53:57.100026   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:57.100042   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:57.100340   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:53:57.100386   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:57.100405   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:53:57.100422   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:57.100434   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:57.100728   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:53:57.100785   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:57.100807   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:53:59.365067   18587 pod_ready.go:103] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"False"
	I0819 16:53:59.454670   18587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 16:53:59.454707   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:59.457906   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.458386   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:59.458411   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.458592   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:59.458820   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:59.458973   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:59.459097   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:59.639561   18587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 16:53:59.675965   18587 addons.go:234] Setting addon gcp-auth=true in "addons-825243"
	I0819 16:53:59.676023   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:59.676331   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:59.676372   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:59.691969   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46025
	I0819 16:53:59.692366   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:59.692955   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:59.692980   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:59.693278   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:59.693730   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:59.693764   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:59.708973   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I0819 16:53:59.709339   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:59.709777   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:59.709798   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:59.710086   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:59.710269   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:59.711758   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:59.711949   18587 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 16:53:59.711967   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:59.714948   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.715390   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:59.715417   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.715597   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:59.715767   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:59.715922   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:59.716061   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:54:00.381337   18587 pod_ready.go:93] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.381357   18587 pod_ready.go:82] duration metric: took 7.530875403s for pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.381366   18587 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l9wkm" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.437753   18587 pod_ready.go:93] pod "coredns-6f6b679f8f-l9wkm" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.437774   18587 pod_ready.go:82] duration metric: took 56.401881ms for pod "coredns-6f6b679f8f-l9wkm" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.437785   18587 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.459340   18587 pod_ready.go:93] pod "etcd-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.459358   18587 pod_ready.go:82] duration metric: took 21.567726ms for pod "etcd-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.459367   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.462466   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.61961915s)
	I0819 16:54:00.462499   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.553441376s)
	I0819 16:54:00.462515   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462528   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462532   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462545   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462633   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.543759153s)
	I0819 16:54:00.462666   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462678   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462725   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.466764137s)
	I0819 16:54:00.462752   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462754   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.456315324s)
	I0819 16:54:00.462761   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462773   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462782   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462819   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.227208151s)
	I0819 16:54:00.462843   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.102966069s)
	I0819 16:54:00.462849   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462858   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462860   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462868   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462951   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.946935113s)
	I0819 16:54:00.462972   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462981   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463045   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.885241517s)
	I0819 16:54:00.463060   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463070   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463202   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.49696418s)
	W0819 16:54:00.463224   18587 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 16:54:00.463246   18587 retry.go:31] will retry after 289.430829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
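	The failure above is the usual CRD ordering problem: the VolumeSnapshotClass object is submitted in the same invocation that creates its CRD, so the REST mapping for the new kind does not exist yet and kubectl answers "no matches for kind". retry.go simply waits and re-applies (the forced re-apply appears at the end of this excerpt). Below is a rough sketch of that wait-and-retry pattern around a kubectl invocation, assuming kubectl is on PATH; the manifest paths and the ~300ms initial delay come from the log, everything else is illustrative and not minikube's retry code.

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// applyAddons re-runs `kubectl apply` until it succeeds or the attempts run out.
// "no matches for kind" errors are transient here: they clear once the CRDs
// created by the same batch have been established by the API server.
func applyAddons(manifests []string, attempts int, delay time.Duration) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = err
		log.Printf("apply failed (%v), retrying in %s:\n%s", err, delay, out)
		time.Sleep(delay)
		delay *= 2 // grow the wait between attempts
	}
	return lastErr
}

func main() {
	if err := applyAddons([]string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}, 5, 300*time.Millisecond); err != nil {
		log.Fatal(err)
	}
}
```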
	I0819 16:54:00.463261   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463280   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463290   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463298   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463346   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.717956752s)
	I0819 16:54:00.463363   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463374   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463374   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463383   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463392   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463399   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463427   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.463451   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463458   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463466   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463473   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463564   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.463597   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463605   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463614   18587 addons.go:475] Verifying addon metrics-server=true in "addons-825243"
	I0819 16:54:00.463647   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463657   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463665   18587 addons.go:475] Verifying addon registry=true in "addons-825243"
	I0819 16:54:00.464731   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.464784   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.464793   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.464886   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.464904   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.464913   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.464920   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.465339   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.465375   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.465383   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466202   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466211   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466217   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466222   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466227   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466232   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466235   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466242   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466302   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466331   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466338   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466346   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466353   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466479   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466513   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466523   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.709268383s)
	I0819 16:54:00.466533   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466544   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466555   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466599   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466624   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466634   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466642   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466669   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466706   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466776   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466785   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466793   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466801   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466881   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466916   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466924   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466933   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466940   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466987   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.467030   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.467037   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.467046   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.467060   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.467073   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.467122   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.467144   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.467151   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.467323   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.467331   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.467363   18587 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-825243 service yakd-dashboard -n yakd-dashboard
	
	I0819 16:54:00.468680   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.468715   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.468725   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.468858   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.468867   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.468880   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.468885   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.468869   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.468916   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.468925   18587 addons.go:475] Verifying addon ingress=true in "addons-825243"
	I0819 16:54:00.470319   18587 out.go:177] * Verifying ingress addon...
	I0819 16:54:00.470449   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.470446   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.470467   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.471348   18587 out.go:177] * Verifying registry addon...
	I0819 16:54:00.472285   18587 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 16:54:00.473282   18587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 16:54:00.486887   18587 pod_ready.go:93] pod "kube-apiserver-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.486903   18587 pod_ready.go:82] duration metric: took 27.530282ms for pod "kube-apiserver-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.486913   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.496549   18587 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 16:54:00.496570   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:00.498865   18587 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 16:54:00.498881   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
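The kapi.go lines above show the harness polling pods by label selector until every match reports Ready. A minimal client-go sketch of that pattern follows; the namespace and selector come from the log, while the helper names, poll interval, and timeout are illustrative assumptions rather than minikube's actual kapi.go implementation.

// sketch: poll pods by label selector until all report Ready (illustrative only)
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod has the Ready condition set to True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForLabel lists pods matching selector in ns until all are Ready or the deadline passes.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout, not taken from kapi.go
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if !podReady(p) {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(3 * time.Second) // roughly the cadence visible in the log timestamps
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
	fmt.Println("ingress-nginx pods are Ready")
}

Swapping the selector for kubernetes.io/minikube-addons=registry reproduces the registry wait seen in the same log.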
	I0819 16:54:00.522453   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.522475   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.522829   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.522848   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.522860   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 16:54:00.522951   18587 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
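The "Operation cannot be fulfilled ... the object has been modified" warning above is the apiserver's standard optimistic-concurrency conflict: the storage class was changed between the read and the write. The usual client-go remedy is to re-read and retry the update on conflict, for example with retry.RetryOnConflict. The sketch below is illustrative only (markLocalPathDefault is a hypothetical helper, not the addon callback that failed here); the storage class name comes from the error and the annotation key is the standard Kubernetes default-storage-class annotation.

// sketch: conflict-tolerant update of a storage class (illustrative only)
package addonsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markLocalPathDefault re-reads the storage class and retries the update whenever
// the apiserver reports a conflict, instead of failing on the first attempt.
func markLocalPathDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // a conflict error triggers another Get+Update round
	})
}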
	I0819 16:54:00.530837   18587 pod_ready.go:93] pod "kube-controller-manager-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.530865   18587 pod_ready.go:82] duration metric: took 43.94413ms for pod "kube-controller-manager-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.530878   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmfp2" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.531140   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.531162   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.531425   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.531484   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.531501   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.752819   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 16:54:00.762033   18587 pod_ready.go:93] pod "kube-proxy-dmfp2" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.762055   18587 pod_ready.go:82] duration metric: took 231.170313ms for pod "kube-proxy-dmfp2" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.762065   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.984631   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:00.984797   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:01.165953   18587 pod_ready.go:93] pod "kube-scheduler-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:01.165976   18587 pod_ready.go:82] duration metric: took 403.904172ms for pod "kube-scheduler-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:01.165986   18587 pod_ready.go:39] duration metric: took 8.323356323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 16:54:01.166005   18587 api_server.go:52] waiting for apiserver process to appear ...
	I0819 16:54:01.166064   18587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 16:54:01.352384   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.527401686s)
	I0819 16:54:01.352409   18587 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.640438079s)
	I0819 16:54:01.352442   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:01.352465   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:01.352764   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:01.352801   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:01.352813   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:01.352824   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:01.352847   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:01.353133   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:01.353148   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:01.353162   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:01.353176   18587 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-825243"
	I0819 16:54:01.354092   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 16:54:01.354988   18587 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 16:54:01.356593   18587 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 16:54:01.357287   18587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 16:54:01.357739   18587 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 16:54:01.357753   18587 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 16:54:01.378507   18587 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 16:54:01.378528   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:01.480229   18587 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 16:54:01.480256   18587 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 16:54:01.550553   18587 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 16:54:01.550581   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 16:54:01.608167   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 16:54:01.768518   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:01.769180   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:01.863920   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:01.978773   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:01.979317   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:02.362764   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:02.476765   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:02.480489   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:02.699215   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.946345491s)
	I0819 16:54:02.699244   18587 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.533156738s)
	I0819 16:54:02.699272   18587 api_server.go:72] duration metric: took 10.451668936s to wait for apiserver process to appear ...
	I0819 16:54:02.699280   18587 api_server.go:88] waiting for apiserver healthz status ...
	I0819 16:54:02.699284   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:02.699301   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:02.699304   18587 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0819 16:54:02.699610   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:02.699715   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:02.699734   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:02.699744   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:02.699759   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:02.699986   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:02.700004   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:02.703854   18587 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0819 16:54:02.705134   18587 api_server.go:141] control plane version: v1.31.0
	I0819 16:54:02.705154   18587 api_server.go:131] duration metric: took 5.864705ms to wait for apiserver health ...
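The healthz wait logged above is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200 with body "ok". A minimal sketch of that loop follows, using the endpoint URL from the log; skipping TLS verification is a shortcut for illustration only, whereas the real check would trust the cluster CA.

// sketch: poll the apiserver healthz endpoint until it answers 200 (illustrative only)
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification is only acceptable in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.129:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver did not become healthy in time")
}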
	I0819 16:54:02.705162   18587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 16:54:02.719240   18587 system_pods.go:59] 19 kube-system pods found
	I0819 16:54:02.719265   18587 system_pods.go:61] "coredns-6f6b679f8f-g248k" [e5b8dc0c-d315-406d-82d5-c89c95dcd0f5] Running
	I0819 16:54:02.719271   18587 system_pods.go:61] "coredns-6f6b679f8f-l9wkm" [82eb534d-3fdc-4c3f-8789-2617f4507636] Running
	I0819 16:54:02.719277   18587 system_pods.go:61] "csi-hostpath-attacher-0" [70c80be5-ed0a-49fb-b287-3bac65011256] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 16:54:02.719283   18587 system_pods.go:61] "csi-hostpath-resizer-0" [bcbd845e-9dc1-42d3-ac75-15e439c7f9df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 16:54:02.719289   18587 system_pods.go:61] "csi-hostpathplugin-bnwxn" [fd70584a-3d87-4343-9f83-29d5b98ce25e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 16:54:02.719293   18587 system_pods.go:61] "etcd-addons-825243" [f36e58d0-0cea-4171-a5ad-10ef0212a1ae] Running
	I0819 16:54:02.719297   18587 system_pods.go:61] "kube-apiserver-addons-825243" [3bfce86d-e822-436d-8eb5-11b42d736b53] Running
	I0819 16:54:02.719301   18587 system_pods.go:61] "kube-controller-manager-addons-825243" [27b791c8-efee-40e1-8039-9993e903c434] Running
	I0819 16:54:02.719309   18587 system_pods.go:61] "kube-ingress-dns-minikube" [4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0819 16:54:02.719315   18587 system_pods.go:61] "kube-proxy-dmfp2" [f676c55d-f283-4321-9815-02303a82a9c9] Running
	I0819 16:54:02.719321   18587 system_pods.go:61] "kube-scheduler-addons-825243" [bc4ff467-bf0c-4e8d-aae2-8e2363388539] Running
	I0819 16:54:02.719328   18587 system_pods.go:61] "metrics-server-8988944d9-j2w2h" [ba217649-2efe-4c98-8076-d73d63794bd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 16:54:02.719337   18587 system_pods.go:61] "nvidia-device-plugin-daemonset-vcml2" [8b9d9981-f3de-4307-9e9f-2ee8621a11c8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0819 16:54:02.719355   18587 system_pods.go:61] "registry-6fb4cdfc84-4g2dz" [eda791b5-556d-4ac5-b370-ea875a1d634a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 16:54:02.719370   18587 system_pods.go:61] "registry-proxy-s2gcq" [59c4a419-cfc5-4b2f-964c-8a0b25b0d01c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 16:54:02.719403   18587 system_pods.go:61] "snapshot-controller-56fcc65765-th5xc" [643f4b21-177b-46f0-8d81-5a2fa7141613] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.719417   18587 system_pods.go:61] "snapshot-controller-56fcc65765-w9w56" [b0a5580b-10bf-4aa1-93f5-30ffb08f129e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.719424   18587 system_pods.go:61] "storage-provisioner" [31d6dc33-8567-4b1a-8db4-36f09be7e471] Running
	I0819 16:54:02.719434   18587 system_pods.go:61] "tiller-deploy-b48cc5f79-wr8hg" [f1ed9b9d-e3d1-4e09-b94f-f29a67830f09] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 16:54:02.719444   18587 system_pods.go:74] duration metric: took 14.277028ms to wait for pod list to return data ...
	I0819 16:54:02.719455   18587 default_sa.go:34] waiting for default service account to be created ...
	I0819 16:54:02.728311   18587 default_sa.go:45] found service account: "default"
	I0819 16:54:02.728330   18587 default_sa.go:55] duration metric: took 8.869878ms for default service account to be created ...
	I0819 16:54:02.728339   18587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 16:54:02.739293   18587 system_pods.go:86] 19 kube-system pods found
	I0819 16:54:02.739316   18587 system_pods.go:89] "coredns-6f6b679f8f-g248k" [e5b8dc0c-d315-406d-82d5-c89c95dcd0f5] Running
	I0819 16:54:02.739322   18587 system_pods.go:89] "coredns-6f6b679f8f-l9wkm" [82eb534d-3fdc-4c3f-8789-2617f4507636] Running
	I0819 16:54:02.739328   18587 system_pods.go:89] "csi-hostpath-attacher-0" [70c80be5-ed0a-49fb-b287-3bac65011256] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 16:54:02.739334   18587 system_pods.go:89] "csi-hostpath-resizer-0" [bcbd845e-9dc1-42d3-ac75-15e439c7f9df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 16:54:02.739341   18587 system_pods.go:89] "csi-hostpathplugin-bnwxn" [fd70584a-3d87-4343-9f83-29d5b98ce25e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 16:54:02.739345   18587 system_pods.go:89] "etcd-addons-825243" [f36e58d0-0cea-4171-a5ad-10ef0212a1ae] Running
	I0819 16:54:02.739349   18587 system_pods.go:89] "kube-apiserver-addons-825243" [3bfce86d-e822-436d-8eb5-11b42d736b53] Running
	I0819 16:54:02.739353   18587 system_pods.go:89] "kube-controller-manager-addons-825243" [27b791c8-efee-40e1-8039-9993e903c434] Running
	I0819 16:54:02.739363   18587 system_pods.go:89] "kube-ingress-dns-minikube" [4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0819 16:54:02.739369   18587 system_pods.go:89] "kube-proxy-dmfp2" [f676c55d-f283-4321-9815-02303a82a9c9] Running
	I0819 16:54:02.739378   18587 system_pods.go:89] "kube-scheduler-addons-825243" [bc4ff467-bf0c-4e8d-aae2-8e2363388539] Running
	I0819 16:54:02.739387   18587 system_pods.go:89] "metrics-server-8988944d9-j2w2h" [ba217649-2efe-4c98-8076-d73d63794bd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 16:54:02.739392   18587 system_pods.go:89] "nvidia-device-plugin-daemonset-vcml2" [8b9d9981-f3de-4307-9e9f-2ee8621a11c8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0819 16:54:02.739405   18587 system_pods.go:89] "registry-6fb4cdfc84-4g2dz" [eda791b5-556d-4ac5-b370-ea875a1d634a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 16:54:02.739411   18587 system_pods.go:89] "registry-proxy-s2gcq" [59c4a419-cfc5-4b2f-964c-8a0b25b0d01c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 16:54:02.739415   18587 system_pods.go:89] "snapshot-controller-56fcc65765-th5xc" [643f4b21-177b-46f0-8d81-5a2fa7141613] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.739421   18587 system_pods.go:89] "snapshot-controller-56fcc65765-w9w56" [b0a5580b-10bf-4aa1-93f5-30ffb08f129e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.739424   18587 system_pods.go:89] "storage-provisioner" [31d6dc33-8567-4b1a-8db4-36f09be7e471] Running
	I0819 16:54:02.739431   18587 system_pods.go:89] "tiller-deploy-b48cc5f79-wr8hg" [f1ed9b9d-e3d1-4e09-b94f-f29a67830f09] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 16:54:02.739440   18587 system_pods.go:126] duration metric: took 11.096419ms to wait for k8s-apps to be running ...
	I0819 16:54:02.739446   18587 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 16:54:02.739492   18587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 16:54:02.888230   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:03.008723   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:03.008808   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:03.335325   18587 system_svc.go:56] duration metric: took 595.862944ms WaitForService to wait for kubelet
	I0819 16:54:03.335360   18587 kubeadm.go:582] duration metric: took 11.087754239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 16:54:03.335386   18587 node_conditions.go:102] verifying NodePressure condition ...
	I0819 16:54:03.337969   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.729764964s)
	I0819 16:54:03.338013   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:03.338030   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:03.338294   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:03.338313   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:03.338344   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:03.338363   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:03.338371   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:03.338619   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:03.338626   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:03.338635   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:03.340206   18587 addons.go:475] Verifying addon gcp-auth=true in "addons-825243"
	I0819 16:54:03.342722   18587 out.go:177] * Verifying gcp-auth addon...
	I0819 16:54:03.344887   18587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 16:54:03.350473   18587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 16:54:03.350494   18587 node_conditions.go:123] node cpu capacity is 2
	I0819 16:54:03.350505   18587 node_conditions.go:105] duration metric: took 15.114103ms to run NodePressure ...
	I0819 16:54:03.350517   18587 start.go:241] waiting for startup goroutines ...
	I0819 16:54:03.365339   18587 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 16:54:03.365362   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:03.413902   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:03.478299   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:03.482404   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:03.849299   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:03.861961   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:03.978959   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:03.979904   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:04.349073   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:04.362259   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:04.478646   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:04.480578   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:04.848736   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:04.862578   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:04.976961   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:04.977413   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:05.348296   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:05.363643   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:05.477150   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:05.477249   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:05.848928   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:05.861712   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:05.976678   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:05.976864   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:06.349245   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:06.361613   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:06.477924   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:06.479113   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:06.852404   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:06.861997   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:06.976850   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:06.977777   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:07.430169   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:07.432784   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:07.530761   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:07.531136   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:07.848721   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:07.862284   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:07.977281   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:07.977583   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:08.348682   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:08.362411   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:08.477531   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:08.477608   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:08.848287   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:08.861901   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:08.977231   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:08.977517   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:09.349372   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:09.362911   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:09.526235   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:09.526612   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:09.851692   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:09.861357   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:09.977006   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:09.977010   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:10.348891   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:10.361288   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:10.476476   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:10.477217   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:10.848287   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:10.862420   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:10.976092   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:10.976533   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:11.349017   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:11.361213   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:11.476154   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:11.477390   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:11.849657   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:11.862417   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:11.977148   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:11.977819   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:12.348865   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:12.361084   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:12.477694   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:12.478059   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:12.848462   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:12.861660   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:12.977556   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:12.978196   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:13.348863   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:13.361205   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:13.476099   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:13.476783   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:13.848821   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:13.862369   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:13.977851   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:13.978006   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:14.349555   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:14.368647   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:14.476995   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:14.477034   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:14.848764   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:14.860957   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:14.976918   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:14.977447   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:15.347923   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:15.361401   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:15.477183   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:15.477410   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:15.848434   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:15.862315   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:15.976612   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:15.977522   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:16.348955   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:16.361470   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:16.477481   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:16.478306   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:16.848764   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:16.861827   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:16.977874   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:16.978252   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:17.350055   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:17.361217   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:17.475993   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:17.477707   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:17.850494   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:17.863325   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:17.977644   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:17.978279   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:18.348403   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:18.362280   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:18.476881   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:18.477345   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:18.848185   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:18.861841   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:18.978312   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:18.979010   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:19.348269   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:19.361892   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:19.476994   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:19.477325   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:19.848378   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:19.862206   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:19.976101   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:19.976728   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:20.351889   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:20.632575   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:20.632951   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:20.634241   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:20.849009   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:20.861837   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:20.976633   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:20.976659   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:21.348672   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:21.361526   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:21.476657   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:21.477842   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:21.849336   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:21.861778   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:21.976973   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:21.977507   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:22.348780   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:22.362192   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:22.476364   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:22.477486   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:22.848199   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:22.861198   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:22.976628   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:22.977088   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:23.349426   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:23.362095   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:23.476869   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:23.477941   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:23.849406   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:23.861851   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:23.978315   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:23.979006   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:24.348541   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:24.362533   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:24.477090   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:24.477222   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:24.849086   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:24.861321   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:24.977205   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:24.977958   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:25.348531   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:25.361894   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:25.477624   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:25.478207   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:25.887516   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:25.888243   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:25.986822   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:25.987216   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:26.349513   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:26.361909   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:26.477759   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:26.477882   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:26.849509   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:26.864663   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:26.979105   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:26.979236   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:27.348307   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:27.361447   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:27.478153   18587 kapi.go:107] duration metric: took 27.004865454s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 16:54:27.478342   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:27.848340   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:27.862259   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:27.976892   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:28.348659   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:28.361236   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:28.488928   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:28.848674   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:28.861722   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:28.977762   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:29.349185   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:29.362202   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:29.476654   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:29.848919   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:29.862623   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:30.098577   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:30.348040   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:30.361955   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:30.477142   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:30.849329   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:30.861881   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:30.977100   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:31.349162   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:31.361505   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:31.483453   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:31.848071   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:31.861910   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:31.977298   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:32.348304   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:32.361256   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:32.476121   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:32.849713   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:32.861851   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:32.976857   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:33.348312   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:33.362033   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:33.477362   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:33.848996   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:33.861733   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:33.976438   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:34.349075   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:34.361576   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:34.476401   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:34.850733   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:34.862562   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:34.978663   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:35.536950   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:35.537948   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:35.538637   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:35.848061   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:35.861910   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:35.976231   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:36.349232   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:36.361270   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:36.476130   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:36.850536   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:36.862411   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:36.976525   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:37.349006   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:37.362513   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:37.476671   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:37.848395   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:37.861530   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:37.976058   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:38.348848   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:38.360918   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:38.476700   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:38.848673   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:38.861139   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:38.975930   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:39.348621   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:39.360915   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:39.480310   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:39.849412   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:39.861293   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:39.976951   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:40.349149   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:40.362390   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:40.476625   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:40.848499   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:40.862169   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:40.976009   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:41.349511   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:41.362034   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:41.477494   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:41.849101   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:41.864934   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:41.976912   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:42.349241   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:42.361420   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:42.476183   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:42.848038   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:42.861251   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:42.976326   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:43.349375   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:43.362479   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:43.478153   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:43.848937   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:43.862105   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:43.975858   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:44.349057   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:44.361694   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:44.476332   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:44.851009   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:44.861711   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:44.977105   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:45.349029   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:45.361320   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:45.476267   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:45.848968   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:45.861514   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:45.976778   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:46.348066   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:46.361197   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:46.475899   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:46.848665   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:46.861521   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:46.976801   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:47.348985   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:47.361812   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:47.476497   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:47.848078   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:47.861607   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:47.976240   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:48.549844   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:48.550127   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:48.550286   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:48.849134   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:48.861674   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:48.976098   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:49.349182   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:49.364606   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:49.476143   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:49.848868   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:49.861534   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:49.976275   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:50.348082   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:50.361482   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:50.476630   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:50.849336   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:50.862490   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:50.976773   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:51.348135   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:51.361888   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:51.476436   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:51.847956   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:51.862100   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:51.977030   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:52.349451   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:52.362751   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:52.476721   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:52.848883   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:52.862754   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:52.976909   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:53.349036   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:53.361229   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:53.477172   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:53.848257   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:53.862038   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:53.976148   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:54.354039   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:54.361935   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:54.477395   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:54.849292   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:54.861781   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:54.976713   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:55.348229   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:55.362045   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:55.477017   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:55.848361   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:55.862482   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:55.977603   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:56.348736   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:56.363337   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:56.476931   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:56.849054   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:56.861649   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:56.976203   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:57.350018   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:57.361693   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:57.490201   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:57.849760   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:57.861096   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:57.976468   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:58.348542   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:58.363216   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:58.476189   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:58.848848   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:58.861195   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:58.976649   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:59.353565   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:59.362195   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:59.477547   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:59.849213   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:59.861622   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:59.978416   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:00.354004   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:00.369335   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:00.481924   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:00.849902   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:00.952433   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:00.978889   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:01.353524   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:01.364473   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:01.478847   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:01.849483   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:01.862954   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:01.977387   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:02.348096   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:02.361469   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:02.477058   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:02.849284   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:02.862115   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:02.976305   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:03.348884   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:03.361965   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:03.477810   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:03.848375   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:03.861703   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:03.976859   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:04.348245   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:04.361918   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:04.477922   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:04.849254   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:04.862091   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:04.976121   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:05.349366   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:05.362017   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:05.482590   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:05.849334   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:05.861679   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:05.976722   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:06.349394   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:06.362430   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:06.477037   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:06.851891   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:06.862727   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:06.976587   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:07.348866   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:07.362607   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:07.477905   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:07.849486   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:07.863200   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:07.976860   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:08.355353   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:08.361526   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:08.484384   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:08.849145   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:08.862251   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:08.976041   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:09.348483   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:09.361780   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:09.477297   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:09.849049   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:09.861526   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:09.976830   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:10.348474   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:10.362177   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:10.477687   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:10.848970   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:10.861488   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:10.976829   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:11.395330   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:11.395521   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:11.476890   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:11.848235   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:11.861672   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:11.976609   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:12.348719   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:12.361837   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:12.475757   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:12.849595   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:12.861591   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:12.976957   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:13.348641   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:13.362693   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:13.476157   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:13.848352   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:13.862236   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:13.977026   18587 kapi.go:107] duration metric: took 1m13.504739338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 16:55:14.690676   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:14.693174   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:14.851002   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:14.865074   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:15.349109   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:15.362383   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:15.849324   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:15.862349   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:16.349245   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:16.361899   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:16.848912   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:16.863741   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:17.349364   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:17.361990   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:17.847894   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:17.861817   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:18.353484   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:18.453824   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:18.849305   18587 kapi.go:107] duration metric: took 1m15.504411683s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 16:55:18.851390   18587 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-825243 cluster.
	I0819 16:55:18.852862   18587 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 16:55:18.854336   18587 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 16:55:18.861986   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:19.362152   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:19.861867   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:20.361993   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:20.862951   18587 kapi.go:107] duration metric: took 1m19.505661731s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 16:55:20.865036   18587 out.go:177] * Enabled addons: storage-provisioner, metrics-server, helm-tiller, nvidia-device-plugin, ingress-dns, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 16:55:20.866365   18587 addons.go:510] duration metric: took 1m28.618714412s for enable addons: enabled=[storage-provisioner metrics-server helm-tiller nvidia-device-plugin ingress-dns inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 16:55:20.866418   18587 start.go:246] waiting for cluster config update ...
	I0819 16:55:20.866447   18587 start.go:255] writing updated cluster config ...
	I0819 16:55:20.866708   18587 ssh_runner.go:195] Run: rm -f paused
	I0819 16:55:20.920180   18587 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 16:55:20.921946   18587 out.go:177] * Done! kubectl is now configured to use "addons-825243" cluster and "default" namespace by default
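	The gcp-auth messages above describe opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. The following is a minimal sketch of what such a pod manifest could look like; the pod name and image are hypothetical, and the label value "true" follows the minikube documentation's usual convention rather than anything shown in this log:
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical pod name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"   # tells the gcp-auth webhook to skip mounting GCP credentials
	    spec:
	      containers:
	      - name: app
	        image: busybox                 # placeholder image
	        command: ["sleep", "3600"]
	
	Because the webhook mutates pods at admission time, the label must be present when the pod is created; as the log above notes, pods that already exist need to be recreated (or the addon re-enabled with --refresh) for the change to take effect.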
	
	
	==> CRI-O <==
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.395777848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086736395752892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e0021e7-afde-44d4-af4f-6c315db32787 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.396453594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fb2618c-c5ef-4a40-9f1c-8b9d6d3bf25b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.396505742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fb2618c-c5ef-4a40-9f1c-8b9d6d3bf25b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.396793548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056e88818e0c0afdfcfb6359120a964237fd6b453c440d6b7f39ce7b079dba74,PodSandboxId:128d8ffcd60b912eef2cfc5c1302a2a921e5ee85136e7d914246bb4db09f17bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494309653032,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fkfjb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: ddf40bd3-9401-4adf-b1e4-89534f5cabef,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e29e6ba0dc1ed854506433fdff83ae26712320ce9110d816347852c68e0428,PodSandboxId:403c431d429f7526002b3398b76e5df3c8f09d931afb8dac6ac6b137f05cd957,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494164680904,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pw4qq,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9b514a8-0371-46f6-82fb-6413d9fd797f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fb2618c-c5ef-4a40-9f1c-8b9d6d3bf25b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.431264428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea877f43-a33b-49d0-8bd2-983c4fd1d8c1 name=/runtime.v1.RuntimeService/Version
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.431344393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea877f43-a33b-49d0-8bd2-983c4fd1d8c1 name=/runtime.v1.RuntimeService/Version
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.432394333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acd8dd19-7f30-45f4-bb65-1dc8f074e56b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.433768658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086736433742324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acd8dd19-7f30-45f4-bb65-1dc8f074e56b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.434234491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=185db0c9-6e3f-4dfa-be4f-4815116330d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.434303277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=185db0c9-6e3f-4dfa-be4f-4815116330d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.434590824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056e88818e0c0afdfcfb6359120a964237fd6b453c440d6b7f39ce7b079dba74,PodSandboxId:128d8ffcd60b912eef2cfc5c1302a2a921e5ee85136e7d914246bb4db09f17bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494309653032,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fkfjb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: ddf40bd3-9401-4adf-b1e4-89534f5cabef,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e29e6ba0dc1ed854506433fdff83ae26712320ce9110d816347852c68e0428,PodSandboxId:403c431d429f7526002b3398b76e5df3c8f09d931afb8dac6ac6b137f05cd957,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494164680904,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pw4qq,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9b514a8-0371-46f6-82fb-6413d9fd797f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=185db0c9-6e3f-4dfa-be4f-4815116330d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.469355201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2af8ff26-35db-46c2-9350-b1d2e30b28be name=/runtime.v1.RuntimeService/Version
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.469451291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2af8ff26-35db-46c2-9350-b1d2e30b28be name=/runtime.v1.RuntimeService/Version
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.470708325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75c3e03e-5177-4a32-bdf0-9a30549c5e6d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.471943645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086736471911602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75c3e03e-5177-4a32-bdf0-9a30549c5e6d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.472938189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c2670dd-4d53-4825-b5d7-6e1d65e234ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.473018190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c2670dd-4d53-4825-b5d7-6e1d65e234ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.473308718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056e88818e0c0afdfcfb6359120a964237fd6b453c440d6b7f39ce7b079dba74,PodSandboxId:128d8ffcd60b912eef2cfc5c1302a2a921e5ee85136e7d914246bb4db09f17bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494309653032,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fkfjb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: ddf40bd3-9401-4adf-b1e4-89534f5cabef,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e29e6ba0dc1ed854506433fdff83ae26712320ce9110d816347852c68e0428,PodSandboxId:403c431d429f7526002b3398b76e5df3c8f09d931afb8dac6ac6b137f05cd957,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494164680904,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pw4qq,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9b514a8-0371-46f6-82fb-6413d9fd797f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c2670dd-4d53-4825-b5d7-6e1d65e234ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.504968087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88e3bc88-bd99-4ad8-904e-7c2c2aed4e88 name=/runtime.v1.RuntimeService/Version
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.505078627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88e3bc88-bd99-4ad8-904e-7c2c2aed4e88 name=/runtime.v1.RuntimeService/Version
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.506397275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=093d8b54-c599-4aa6-a022-a16c96d770cd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.508305569Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086736508276080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=093d8b54-c599-4aa6-a022-a16c96d770cd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.508943071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc55ea24-1705-4259-a04a-169fe860fb0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.508998923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc55ea24-1705-4259-a04a-169fe860fb0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 16:58:56 addons-825243 crio[678]: time="2024-08-19 16:58:56.509516861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056e88818e0c0afdfcfb6359120a964237fd6b453c440d6b7f39ce7b079dba74,PodSandboxId:128d8ffcd60b912eef2cfc5c1302a2a921e5ee85136e7d914246bb4db09f17bc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494309653032,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fkfjb,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: ddf40bd3-9401-4adf-b1e4-89534f5cabef,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e29e6ba0dc1ed854506433fdff83ae26712320ce9110d816347852c68e0428,PodSandboxId:403c431d429f7526002b3398b76e5df3c8f09d931afb8dac6ac6b137f05cd957,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724086494164680904,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pw4qq,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c9b514a8-0371-46f6-82fb-6413d9fd797f,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc55ea24-1705-4259-a04a-169fe860fb0f name=/runtime.v1.RuntimeService/ListContainers
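	The Version, ImageFsInfo and ListContainers request/response pairs above are the kubelet's routine CRI polling of CRI-O over its unix socket. A minimal sketch (not part of the test suite) of issuing the same three calls directly is shown below; it assumes the k8s.io/cri-api and gRPC modules are available and that it runs on the node (for example via `minikube ssh`), with the socket path taken from the node annotation later in this log (unix:///var/run/crio/crio.sock).

	```go
	// Sketch only: replay the CRI calls seen in the crio debug log
	// (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers).
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Socket path from the kubeadm.alpha.kubernetes.io/cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimev1.NewRuntimeServiceClient(conn)
		img := runtimev1.NewImageServiceClient(conn)

		ver, err := rt.Version(ctx, &runtimev1.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		fs, err := img.ImageFsInfo(ctx, &runtimev1.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
		}

		// An empty filter returns everything, which is why the log prints
		// "No filters were applied, returning full container list".
		cs, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range cs.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}
	```

	The "container status" table that follows is the same ListContainers data rendered one container per row.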
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92d6a6fb58a0b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   033facd3d0d8c       hello-world-app-55bf9c44b4-pxx9b
	75fbd6bf19018       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   f12ced1940e7b       nginx
	d095ab106d7f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   5ce648696b6e9       busybox
	056e88818e0c0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              patch                     0                   128d8ffcd60b9       ingress-nginx-admission-patch-fkfjb
	44e29e6ba0dc1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              create                    0                   403c431d429f7       ingress-nginx-admission-create-pw4qq
	7e3644cc9fe92       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   5e06970da9213       local-path-provisioner-86d989889c-jfc4v
	f69fffb929a00       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   e4a4fd6a63021       metrics-server-8988944d9-j2w2h
	6c2450e2dc005       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d0103098f1809       storage-provisioner
	d72decfaa4067       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   3b802b9a05eb2       coredns-6f6b679f8f-g248k
	a93ec25eebd60       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   2092262a8f5e0       kube-proxy-dmfp2
	b4daf922ea6fc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   e5b1c9dad8266       etcd-addons-825243
	59baf8452639b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   56a7a29e8b717       kube-scheduler-addons-825243
	0e6b65e02148e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   57ad90b76c83c       kube-controller-manager-addons-825243
	d58ad92a674cc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   cd96040973c29       kube-apiserver-addons-825243
	
	
	==> coredns [d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed] <==
	[INFO] 10.244.0.7:59064 - 57053 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000252625s
	[INFO] 10.244.0.7:57627 - 28620 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107087s
	[INFO] 10.244.0.7:57627 - 64718 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122167s
	[INFO] 10.244.0.7:40848 - 34389 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151905s
	[INFO] 10.244.0.7:40848 - 39767 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111358s
	[INFO] 10.244.0.7:39316 - 45051 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164496s
	[INFO] 10.244.0.7:39316 - 50685 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127743s
	[INFO] 10.244.0.7:43887 - 3053 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000202588s
	[INFO] 10.244.0.7:43887 - 30184 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000268326s
	[INFO] 10.244.0.7:55844 - 39835 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074609s
	[INFO] 10.244.0.7:55844 - 58013 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127189s
	[INFO] 10.244.0.7:42607 - 2875 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069037s
	[INFO] 10.244.0.7:42607 - 63545 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085819s
	[INFO] 10.244.0.7:39438 - 41557 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000073605s
	[INFO] 10.244.0.7:39438 - 9558 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080925s
	[INFO] 10.244.0.22:37660 - 8634 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00040356s
	[INFO] 10.244.0.22:60149 - 48801 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000092823s
	[INFO] 10.244.0.22:51326 - 38513 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151422s
	[INFO] 10.244.0.22:37486 - 14650 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000078711s
	[INFO] 10.244.0.22:37747 - 34950 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076232s
	[INFO] 10.244.0.22:54355 - 16126 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101489s
	[INFO] 10.244.0.22:33461 - 33440 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000618964s
	[INFO] 10.244.0.22:37377 - 29868 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000542874s
	[INFO] 10.244.0.26:38718 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000605525s
	[INFO] 10.244.0.26:39649 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169829s
	
	
	==> describe nodes <==
	Name:               addons-825243
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-825243
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=addons-825243
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T16_53_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-825243
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 16:53:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-825243
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 16:58:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 16:56:52 +0000   Mon, 19 Aug 2024 16:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 16:56:52 +0000   Mon, 19 Aug 2024 16:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 16:56:52 +0000   Mon, 19 Aug 2024 16:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 16:56:52 +0000   Mon, 19 Aug 2024 16:53:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-825243
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1d1d3a536f146e68e13d5373a247a6a
	  System UUID:                a1d1d3a5-36f1-46e6-8e13-d5373a247a6a
	  Boot ID:                    dc6cf311-c879-4ef5-9873-ffa2a469bfc9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  default                     hello-world-app-55bf9c44b4-pxx9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-6f6b679f8f-g248k                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m4s
	  kube-system                 etcd-addons-825243                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m9s
	  kube-system                 kube-apiserver-addons-825243               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-addons-825243      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-dmfp2                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-825243               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 metrics-server-8988944d9-j2w2h             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m59s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  local-path-storage          local-path-provisioner-86d989889c-jfc4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 5m2s  kube-proxy       
	  Normal  Starting                 5m9s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m9s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m9s  kubelet          Node addons-825243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s  kubelet          Node addons-825243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s  kubelet          Node addons-825243 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m8s  kubelet          Node addons-825243 status is now: NodeReady
	  Normal  RegisteredNode           5m5s  node-controller  Node addons-825243 event: Registered Node addons-825243 in Controller
	
	
	==> dmesg <==
	[  +5.230540] kauditd_printk_skb: 131 callbacks suppressed
	[Aug19 16:54] kauditd_printk_skb: 164 callbacks suppressed
	[  +6.880953] kauditd_printk_skb: 36 callbacks suppressed
	[ +16.798661] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.123370] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.512985] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.049745] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.860120] kauditd_printk_skb: 17 callbacks suppressed
	[Aug19 16:55] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.472447] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.104652] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.462638] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.351321] kauditd_printk_skb: 52 callbacks suppressed
	[ +23.897931] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.157966] kauditd_printk_skb: 42 callbacks suppressed
	[Aug19 16:56] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.484918] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.545491] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.827641] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.643515] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.206068] kauditd_printk_skb: 13 callbacks suppressed
	[ +22.706274] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.232374] kauditd_printk_skb: 33 callbacks suppressed
	[Aug19 16:58] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.239037] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501] <==
	{"level":"info","ts":"2024-08-19T16:54:57.635526Z","caller":"traceutil/trace.go:171","msg":"trace[1457434730] transaction","detail":"{read_only:false; response_revision:1017; number_of_response:1; }","duration":"139.265928ms","start":"2024-08-19T16:54:57.496245Z","end":"2024-08-19T16:54:57.635510Z","steps":["trace[1457434730] 'process raft request'  (duration: 114.915852ms)","trace[1457434730] 'compare'  (duration: 23.998666ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T16:55:14.385723Z","caller":"traceutil/trace.go:171","msg":"trace[1684809458] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"186.147214ms","start":"2024-08-19T16:55:14.199560Z","end":"2024-08-19T16:55:14.385707Z","steps":["trace[1684809458] 'process raft request'  (duration: 186.03108ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:55:14.669344Z","caller":"traceutil/trace.go:171","msg":"trace[647184081] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"279.125547ms","start":"2024-08-19T16:55:14.390204Z","end":"2024-08-19T16:55:14.669330Z","steps":["trace[647184081] 'process raft request'  (duration: 279.051686ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:55:14.669907Z","caller":"traceutil/trace.go:171","msg":"trace[1699870266] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"337.825002ms","start":"2024-08-19T16:55:14.331972Z","end":"2024-08-19T16:55:14.669797Z","steps":["trace[1699870266] 'process raft request'  (duration: 335.221032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.672958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:55:14.331954Z","time spent":"340.926963ms","remote":"127.0.0.1:34592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":779,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-qc9mh.17ed2f8cd862f785\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-qc9mh.17ed2f8cd862f785\" value_size:673 lease:6981020788548569968 >> failure:<>"}
	{"level":"info","ts":"2024-08-19T16:55:14.670227Z","caller":"traceutil/trace.go:171","msg":"trace[299941957] linearizableReadLoop","detail":"{readStateIndex:1151; appliedIndex:1150; }","duration":"335.045369ms","start":"2024-08-19T16:55:14.335173Z","end":"2024-08-19T16:55:14.670218Z","steps":["trace[299941957] 'read index received'  (duration: 50.98208ms)","trace[299941957] 'applied index is now lower than readState.Index'  (duration: 284.06229ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T16:55:14.670399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.200065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:14.673607Z","caller":"traceutil/trace.go:171","msg":"trace[1000591769] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"338.381925ms","start":"2024-08-19T16:55:14.335168Z","end":"2024-08-19T16:55:14.673550Z","steps":["trace[1000591769] 'agreement among raft nodes before linearized reading'  (duration: 335.080646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.673661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.555564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:14.673704Z","caller":"traceutil/trace.go:171","msg":"trace[1890172586] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"284.5965ms","start":"2024-08-19T16:55:14.389101Z","end":"2024-08-19T16:55:14.673697Z","steps":["trace[1890172586] 'agreement among raft nodes before linearized reading'  (duration: 284.549793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.673668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:55:14.335136Z","time spent":"338.517164ms","remote":"127.0.0.1:34702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-19T16:55:14.673628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.208709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:14.676512Z","caller":"traceutil/trace.go:171","msg":"trace[1526936199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"329.091354ms","start":"2024-08-19T16:55:14.347408Z","end":"2024-08-19T16:55:14.676500Z","steps":["trace[1526936199] 'agreement among raft nodes before linearized reading'  (duration: 326.195868ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.676595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:55:14.347375Z","time spent":"329.206873ms","remote":"127.0.0.1:34702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-19T16:55:26.353695Z","caller":"traceutil/trace.go:171","msg":"trace[1154825456] linearizableReadLoop","detail":"{readStateIndex:1228; appliedIndex:1227; }","duration":"103.801056ms","start":"2024-08-19T16:55:26.249873Z","end":"2024-08-19T16:55:26.353675Z","steps":["trace[1154825456] 'read index received'  (duration: 103.6311ms)","trace[1154825456] 'applied index is now lower than readState.Index'  (duration: 169.037µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T16:55:26.353949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.044507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:26.354034Z","caller":"traceutil/trace.go:171","msg":"trace[450853085] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"104.156937ms","start":"2024-08-19T16:55:26.249867Z","end":"2024-08-19T16:55:26.354024Z","steps":["trace[450853085] 'agreement among raft nodes before linearized reading'  (duration: 104.01276ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:56:10.626763Z","caller":"traceutil/trace.go:171","msg":"trace[1015615996] transaction","detail":"{read_only:false; response_revision:1490; number_of_response:1; }","duration":"465.874581ms","start":"2024-08-19T16:56:10.160860Z","end":"2024-08-19T16:56:10.626734Z","steps":["trace[1015615996] 'process raft request'  (duration: 465.723442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:56:10.627018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:56:10.160847Z","time spent":"466.06939ms","remote":"127.0.0.1:34794","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1454 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-19T16:56:21.153997Z","caller":"traceutil/trace.go:171","msg":"trace[1300096641] transaction","detail":"{read_only:false; response_revision:1567; number_of_response:1; }","duration":"283.894989ms","start":"2024-08-19T16:56:20.870081Z","end":"2024-08-19T16:56:21.153976Z","steps":["trace[1300096641] 'process raft request'  (duration: 283.784113ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:56:21.154521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.698046ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:56:21.154559Z","caller":"traceutil/trace.go:171","msg":"trace[901507014] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1567; }","duration":"232.755264ms","start":"2024-08-19T16:56:20.921797Z","end":"2024-08-19T16:56:21.154552Z","steps":["trace[901507014] 'agreement among raft nodes before linearized reading'  (duration: 232.682572ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:56:21.154435Z","caller":"traceutil/trace.go:171","msg":"trace[1994802459] linearizableReadLoop","detail":"{readStateIndex:1623; appliedIndex:1622; }","duration":"232.553482ms","start":"2024-08-19T16:56:20.921870Z","end":"2024-08-19T16:56:21.154424Z","steps":["trace[1994802459] 'read index received'  (duration: 231.92339ms)","trace[1994802459] 'applied index is now lower than readState.Index'  (duration: 629.004µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T16:56:21.157510Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.368927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:56:21.157540Z","caller":"traceutil/trace.go:171","msg":"trace[2138731973] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1567; }","duration":"168.41973ms","start":"2024-08-19T16:56:20.989111Z","end":"2024-08-19T16:56:21.157531Z","steps":["trace[2138731973] 'agreement among raft nodes before linearized reading'  (duration: 165.60966ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:58:56 up 5 min,  0 users,  load average: 0.63, 1.09, 0.59
	Linux addons-825243 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81] <==
	E0819 16:55:48.405119       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.27.224:443: connect: connection refused" logger="UnhandledError"
	E0819 16:55:48.407178       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.27.224:443: connect: connection refused" logger="UnhandledError"
	I0819 16:55:48.460092       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0819 16:56:04.732719       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.161.50"}
	E0819 16:56:15.720734       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.129:8443->10.244.0.29:55860: read: connection reset by peer
	I0819 16:56:22.858892       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 16:56:23.041754       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.253.80"}
	I0819 16:56:25.171123       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 16:56:26.248393       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 16:56:29.050249       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 16:56:58.928166       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:58.928229       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:58.951102       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:58.951224       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:58.974777       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:58.975097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:59.002268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:59.002356       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:59.066615       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:59.066663       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 16:57:00.002767       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 16:57:00.066957       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 16:57:00.109167       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 16:58:46.424206       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.87.0"}
	E0819 16:58:48.703619       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48] <==
	I0819 16:57:22.219856       1 shared_informer.go:320] Caches are synced for garbage collector
	W0819 16:57:33.426581       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:57:33.426656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 16:57:39.478589       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:57:39.478723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 16:57:42.492151       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:57:42.492207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 16:57:52.481320       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:57:52.481510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 16:58:16.112515       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:58:16.112593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 16:58:20.255704       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:58:20.255746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 16:58:23.984242       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:58:23.984374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 16:58:24.987077       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:58:24.987130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 16:58:46.228937       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.591845ms"
	I0819 16:58:46.247911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.899525ms"
	I0819 16:58:46.248057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.935µs"
	I0819 16:58:48.603254       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 16:58:48.612511       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0819 16:58:48.617162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="3.129µs"
	I0819 16:58:50.024071       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.319462ms"
	I0819 16:58:50.024159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.062µs"
	
	
	==> kube-proxy [a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 16:53:54.470978       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 16:53:54.481797       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	E0819 16:53:54.481892       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 16:53:54.537240       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 16:53:54.537269       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 16:53:54.537322       1 server_linux.go:169] "Using iptables Proxier"
	I0819 16:53:54.540538       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 16:53:54.540797       1 server.go:483] "Version info" version="v1.31.0"
	I0819 16:53:54.544908       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 16:53:54.551708       1 config.go:104] "Starting endpoint slice config controller"
	I0819 16:53:54.551770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 16:53:54.551835       1 config.go:197] "Starting service config controller"
	I0819 16:53:54.551840       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 16:53:54.551893       1 config.go:326] "Starting node config controller"
	I0819 16:53:54.551919       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 16:53:54.653863       1 shared_informer.go:320] Caches are synced for service config
	I0819 16:53:54.653923       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 16:53:54.654444       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad] <==
	W0819 16:53:44.692748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 16:53:44.694121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.502564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 16:53:45.502611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.582124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 16:53:45.582188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.634449       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 16:53:45.634604       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 16:53:45.637722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 16:53:45.637764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.647124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 16:53:45.647224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.692269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 16:53:45.692417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.747401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 16:53:45.747449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.813296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 16:53:45.813406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.901673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 16:53:45.901781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.942301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 16:53:45.942573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.943532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 16:53:45.943614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 16:53:47.382755       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 16:58:47 addons-825243 kubelet[1223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 16:58:47 addons-825243 kubelet[1223]: I0819 16:58:47.363926    1223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhkm8\" (UniqueName: \"kubernetes.io/projected/4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c-kube-api-access-vhkm8\") pod \"4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c\" (UID: \"4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c\") "
	Aug 19 16:58:47 addons-825243 kubelet[1223]: I0819 16:58:47.365881    1223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c-kube-api-access-vhkm8" (OuterVolumeSpecName: "kube-api-access-vhkm8") pod "4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c" (UID: "4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c"). InnerVolumeSpecName "kube-api-access-vhkm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 16:58:47 addons-825243 kubelet[1223]: I0819 16:58:47.465235    1223 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vhkm8\" (UniqueName: \"kubernetes.io/projected/4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c-kube-api-access-vhkm8\") on node \"addons-825243\" DevicePath \"\""
	Aug 19 16:58:47 addons-825243 kubelet[1223]: E0819 16:58:47.628630    1223 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086727628124934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585116,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 16:58:47 addons-825243 kubelet[1223]: E0819 16:58:47.628655    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086727628124934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585116,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 16:58:47 addons-825243 kubelet[1223]: I0819 16:58:47.979651    1223 scope.go:117] "RemoveContainer" containerID="f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196"
	Aug 19 16:58:48 addons-825243 kubelet[1223]: I0819 16:58:48.008315    1223 scope.go:117] "RemoveContainer" containerID="f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196"
	Aug 19 16:58:48 addons-825243 kubelet[1223]: E0819 16:58:48.010764    1223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196\": container with ID starting with f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196 not found: ID does not exist" containerID="f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196"
	Aug 19 16:58:48 addons-825243 kubelet[1223]: I0819 16:58:48.010848    1223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196"} err="failed to get container status \"f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196\": rpc error: code = NotFound desc = could not find container \"f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196\": container with ID starting with f19f6c4b2f51ff4ba5307af68dc30acdade5479b3a5820f1f3639fee325af196 not found: ID does not exist"
	Aug 19 16:58:49 addons-825243 kubelet[1223]: I0819 16:58:49.294353    1223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c" path="/var/lib/kubelet/pods/4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c/volumes"
	Aug 19 16:58:49 addons-825243 kubelet[1223]: I0819 16:58:49.294861    1223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b514a8-0371-46f6-82fb-6413d9fd797f" path="/var/lib/kubelet/pods/c9b514a8-0371-46f6-82fb-6413d9fd797f/volumes"
	Aug 19 16:58:49 addons-825243 kubelet[1223]: I0819 16:58:49.295253    1223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf40bd3-9401-4adf-b1e4-89534f5cabef" path="/var/lib/kubelet/pods/ddf40bd3-9401-4adf-b1e4-89534f5cabef/volumes"
	Aug 19 16:58:50 addons-825243 kubelet[1223]: I0819 16:58:50.012128    1223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-pxx9b" podStartSLOduration=1.763835391 podStartE2EDuration="4.012100161s" podCreationTimestamp="2024-08-19 16:58:46 +0000 UTC" firstStartedPulling="2024-08-19 16:58:46.774969099 +0000 UTC m=+299.587752467" lastFinishedPulling="2024-08-19 16:58:49.023233878 +0000 UTC m=+301.836017237" observedRunningTime="2024-08-19 16:58:50.011899945 +0000 UTC m=+302.824683321" watchObservedRunningTime="2024-08-19 16:58:50.012100161 +0000 UTC m=+302.824883536"
	Aug 19 16:58:51 addons-825243 kubelet[1223]: I0819 16:58:51.992071    1223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wssjl\" (UniqueName: \"kubernetes.io/projected/153c5ff0-a742-495d-a2a4-2df729a73025-kube-api-access-wssjl\") pod \"153c5ff0-a742-495d-a2a4-2df729a73025\" (UID: \"153c5ff0-a742-495d-a2a4-2df729a73025\") "
	Aug 19 16:58:51 addons-825243 kubelet[1223]: I0819 16:58:51.992129    1223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/153c5ff0-a742-495d-a2a4-2df729a73025-webhook-cert\") pod \"153c5ff0-a742-495d-a2a4-2df729a73025\" (UID: \"153c5ff0-a742-495d-a2a4-2df729a73025\") "
	Aug 19 16:58:51 addons-825243 kubelet[1223]: I0819 16:58:51.995190    1223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153c5ff0-a742-495d-a2a4-2df729a73025-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "153c5ff0-a742-495d-a2a4-2df729a73025" (UID: "153c5ff0-a742-495d-a2a4-2df729a73025"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 16:58:51 addons-825243 kubelet[1223]: I0819 16:58:51.996235    1223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153c5ff0-a742-495d-a2a4-2df729a73025-kube-api-access-wssjl" (OuterVolumeSpecName: "kube-api-access-wssjl") pod "153c5ff0-a742-495d-a2a4-2df729a73025" (UID: "153c5ff0-a742-495d-a2a4-2df729a73025"). InnerVolumeSpecName "kube-api-access-wssjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 16:58:52 addons-825243 kubelet[1223]: I0819 16:58:52.011504    1223 scope.go:117] "RemoveContainer" containerID="b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f"
	Aug 19 16:58:52 addons-825243 kubelet[1223]: I0819 16:58:52.030996    1223 scope.go:117] "RemoveContainer" containerID="b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f"
	Aug 19 16:58:52 addons-825243 kubelet[1223]: E0819 16:58:52.031512    1223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f\": container with ID starting with b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f not found: ID does not exist" containerID="b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f"
	Aug 19 16:58:52 addons-825243 kubelet[1223]: I0819 16:58:52.031547    1223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f"} err="failed to get container status \"b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f\": rpc error: code = NotFound desc = could not find container \"b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f\": container with ID starting with b1975a065b65c35173129910954f482628bb3590db66c3d5018c19e2fc62a05f not found: ID does not exist"
	Aug 19 16:58:52 addons-825243 kubelet[1223]: I0819 16:58:52.093006    1223 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/153c5ff0-a742-495d-a2a4-2df729a73025-webhook-cert\") on node \"addons-825243\" DevicePath \"\""
	Aug 19 16:58:52 addons-825243 kubelet[1223]: I0819 16:58:52.093034    1223 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wssjl\" (UniqueName: \"kubernetes.io/projected/153c5ff0-a742-495d-a2a4-2df729a73025-kube-api-access-wssjl\") on node \"addons-825243\" DevicePath \"\""
	Aug 19 16:58:53 addons-825243 kubelet[1223]: I0819 16:58:53.295130    1223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153c5ff0-a742-495d-a2a4-2df729a73025" path="/var/lib/kubelet/pods/153c5ff0-a742-495d-a2a4-2df729a73025/volumes"
	
	
	==> storage-provisioner [6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce] <==
	I0819 16:53:59.069065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 16:53:59.114083       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 16:53:59.126138       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 16:53:59.201475       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 16:53:59.201636       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-825243_200d2208-1fb1-4eb9-92c3-f32d08f0589d!
	I0819 16:53:59.202585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd5f78eb-9430-4ee8-b358-eeaf905abaa0", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-825243_200d2208-1fb1-4eb9-92c3-f32d08f0589d became leader
	I0819 16:53:59.402361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-825243_200d2208-1fb1-4eb9-92c3-f32d08f0589d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-825243 -n addons-825243
helpers_test.go:261: (dbg) Run:  kubectl --context addons-825243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.89s)
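For anyone re-checking this failure by hand, a minimal probe of the same cluster might look like the sketch below. It assumes the addons-825243 profile from the logs above is still running and that the controller Deployment is named ingress-nginx-controller (inferred from the ingress-nginx-controller-bc57996ff ReplicaSet in the kube-controller-manager log); neither holds once the profile is deleted.

	# Sketch only: inspect the controller pods/service, the Ingress objects, and recent controller logs.
	kubectl --context addons-825243 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-825243 get ingress -A
	kubectl --context addons-825243 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50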

x
+
TestAddons/parallel/MetricsServer (357.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.741701ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-j2w2h" [ba217649-2efe-4c98-8076-d73d63794bd7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006540606s
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (62.143051ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 2m17.789828217s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (71.435376ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 2m20.661753947s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (80.935097ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 2m27.396580707s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (67.386925ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 2m36.107687886s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (62.239312ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 2m46.281690176s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (61.237314ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 3m1.764794016s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (60.029665ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 3m29.145344045s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (60.813498ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 3m54.415560621s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (64.402039ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 4m51.481812565s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (62.230103ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 6m0.376513133s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (60.27883ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 6m53.549525402s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (60.44373ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 7m27.638921026s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-825243 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-825243 top pods -n kube-system: exit status 1 (61.634154ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-g248k, age: 8m7.031079873s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
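Before the addon is disabled below, a quick manual check of the metrics pipeline might look like this sketch. It assumes the addons-825243 context from this run and a Deployment named metrics-server (inferred from the metrics-server-8988944d9-j2w2h pod above); the v1beta1.metrics.k8s.io APIService name is taken from the kube-apiserver log earlier in this report.

	# Sketch only: confirm the APIService is registered, then query the metrics API and the server's own logs.
	kubectl --context addons-825243 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-825243 top nodes
	kubectl --context addons-825243 -n kube-system logs deploy/metrics-server --tail=50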
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-825243 -n addons-825243
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 logs -n 25: (1.205849545s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-444293                                                                     | download-only-444293 | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC | 19 Aug 24 16:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-174718 | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC |                     |
	|         | binary-mirror-174718                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38627                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-174718                                                                     | binary-mirror-174718 | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC | 19 Aug 24 16:53 UTC |
	| addons  | disable dashboard -p                                                                        | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC |                     |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC |                     |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-825243 --wait=true                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:53 UTC | 19 Aug 24 16:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:55 UTC | 19 Aug 24 16:55 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:55 UTC | 19 Aug 24 16:55 UTC |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:55 UTC | 19 Aug 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-825243 ssh cat                                                                       | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | /opt/local-path-provisioner/pvc-63640194-31bc-4782-b58f-2706becef52c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | -p addons-825243                                                                            |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | -p addons-825243                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-825243 ip                                                                            | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | addons-825243                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-825243 ssh curl -s                                                                   | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-825243 addons                                                                        | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-825243 addons                                                                        | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:56 UTC | 19 Aug 24 16:56 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-825243 ip                                                                            | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:58 UTC | 19 Aug 24 16:58 UTC |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:58 UTC | 19 Aug 24 16:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825243 addons disable                                                                | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 16:58 UTC | 19 Aug 24 16:58 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-825243 addons                                                                        | addons-825243        | jenkins | v1.33.1 | 19 Aug 24 17:01 UTC | 19 Aug 24 17:01 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 16:53:08
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 16:53:08.536296   18587 out.go:345] Setting OutFile to fd 1 ...
	I0819 16:53:08.536789   18587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:53:08.536838   18587 out.go:358] Setting ErrFile to fd 2...
	I0819 16:53:08.536856   18587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:53:08.537294   18587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 16:53:08.538286   18587 out.go:352] Setting JSON to false
	I0819 16:53:08.539076   18587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2134,"bootTime":1724084255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 16:53:08.539132   18587 start.go:139] virtualization: kvm guest
	I0819 16:53:08.541156   18587 out.go:177] * [addons-825243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 16:53:08.542654   18587 notify.go:220] Checking for updates...
	I0819 16:53:08.542667   18587 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 16:53:08.544423   18587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 16:53:08.545926   18587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 16:53:08.547474   18587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:53:08.548867   18587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 16:53:08.550252   18587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 16:53:08.551826   18587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 16:53:08.583528   18587 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 16:53:08.584876   18587 start.go:297] selected driver: kvm2
	I0819 16:53:08.584890   18587 start.go:901] validating driver "kvm2" against <nil>
	I0819 16:53:08.584901   18587 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 16:53:08.585621   18587 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:53:08.585692   18587 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 16:53:08.600403   18587 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 16:53:08.600460   18587 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 16:53:08.600683   18587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 16:53:08.600745   18587 cni.go:84] Creating CNI manager for ""
	I0819 16:53:08.600782   18587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:53:08.600797   18587 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 16:53:08.600856   18587 start.go:340] cluster config:
	{Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 16:53:08.600954   18587 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:53:08.602805   18587 out.go:177] * Starting "addons-825243" primary control-plane node in "addons-825243" cluster
	I0819 16:53:08.604274   18587 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 16:53:08.604319   18587 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 16:53:08.604340   18587 cache.go:56] Caching tarball of preloaded images
	I0819 16:53:08.604433   18587 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 16:53:08.604448   18587 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 16:53:08.604737   18587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/config.json ...
	I0819 16:53:08.604778   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/config.json: {Name:mk03102e743c14e50e5d12b93edfed098d134cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:08.604954   18587 start.go:360] acquireMachinesLock for addons-825243: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 16:53:08.605016   18587 start.go:364] duration metric: took 44.552µs to acquireMachinesLock for "addons-825243"
	I0819 16:53:08.605043   18587 start.go:93] Provisioning new machine with config: &{Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 16:53:08.605108   18587 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 16:53:08.606990   18587 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 16:53:08.607139   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:08.607181   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:08.621417   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0819 16:53:08.621808   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:08.622285   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:08.622327   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:08.622647   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:08.622817   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:08.622946   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:08.623071   18587 start.go:159] libmachine.API.Create for "addons-825243" (driver="kvm2")
	I0819 16:53:08.623093   18587 client.go:168] LocalClient.Create starting
	I0819 16:53:08.623126   18587 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 16:53:08.673646   18587 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 16:53:08.846596   18587 main.go:141] libmachine: Running pre-create checks...
	I0819 16:53:08.846620   18587 main.go:141] libmachine: (addons-825243) Calling .PreCreateCheck
	I0819 16:53:08.847145   18587 main.go:141] libmachine: (addons-825243) Calling .GetConfigRaw
	I0819 16:53:08.847601   18587 main.go:141] libmachine: Creating machine...
	I0819 16:53:08.847615   18587 main.go:141] libmachine: (addons-825243) Calling .Create
	I0819 16:53:08.847768   18587 main.go:141] libmachine: (addons-825243) Creating KVM machine...
	I0819 16:53:08.849062   18587 main.go:141] libmachine: (addons-825243) DBG | found existing default KVM network
	I0819 16:53:08.849703   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:08.849558   18609 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0819 16:53:08.849725   18587 main.go:141] libmachine: (addons-825243) DBG | created network xml: 
	I0819 16:53:08.849738   18587 main.go:141] libmachine: (addons-825243) DBG | <network>
	I0819 16:53:08.849749   18587 main.go:141] libmachine: (addons-825243) DBG |   <name>mk-addons-825243</name>
	I0819 16:53:08.849760   18587 main.go:141] libmachine: (addons-825243) DBG |   <dns enable='no'/>
	I0819 16:53:08.849772   18587 main.go:141] libmachine: (addons-825243) DBG |   
	I0819 16:53:08.849783   18587 main.go:141] libmachine: (addons-825243) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 16:53:08.849790   18587 main.go:141] libmachine: (addons-825243) DBG |     <dhcp>
	I0819 16:53:08.849845   18587 main.go:141] libmachine: (addons-825243) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 16:53:08.849867   18587 main.go:141] libmachine: (addons-825243) DBG |     </dhcp>
	I0819 16:53:08.849875   18587 main.go:141] libmachine: (addons-825243) DBG |   </ip>
	I0819 16:53:08.849884   18587 main.go:141] libmachine: (addons-825243) DBG |   
	I0819 16:53:08.849892   18587 main.go:141] libmachine: (addons-825243) DBG | </network>
	I0819 16:53:08.849900   18587 main.go:141] libmachine: (addons-825243) DBG | 
	I0819 16:53:08.855642   18587 main.go:141] libmachine: (addons-825243) DBG | trying to create private KVM network mk-addons-825243 192.168.39.0/24...
	I0819 16:53:08.921593   18587 main.go:141] libmachine: (addons-825243) DBG | private KVM network mk-addons-825243 192.168.39.0/24 created
	I0819 16:53:08.921629   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:08.921527   18609 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:53:08.921644   18587 main.go:141] libmachine: (addons-825243) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243 ...
	I0819 16:53:08.921659   18587 main.go:141] libmachine: (addons-825243) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 16:53:08.921671   18587 main.go:141] libmachine: (addons-825243) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 16:53:09.207395   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:09.207287   18609 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa...
	I0819 16:53:09.483143   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:09.483023   18609 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/addons-825243.rawdisk...
	I0819 16:53:09.483180   18587 main.go:141] libmachine: (addons-825243) DBG | Writing magic tar header
	I0819 16:53:09.483190   18587 main.go:141] libmachine: (addons-825243) DBG | Writing SSH key tar header
	I0819 16:53:09.483198   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:09.483133   18609 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243 ...
	I0819 16:53:09.483297   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243 (perms=drwx------)
	I0819 16:53:09.483320   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243
	I0819 16:53:09.483328   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 16:53:09.483335   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 16:53:09.483342   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 16:53:09.483352   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 16:53:09.483358   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 16:53:09.483365   18587 main.go:141] libmachine: (addons-825243) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 16:53:09.483369   18587 main.go:141] libmachine: (addons-825243) Creating domain...
	I0819 16:53:09.483378   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:53:09.483385   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 16:53:09.483393   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 16:53:09.483398   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home/jenkins
	I0819 16:53:09.483406   18587 main.go:141] libmachine: (addons-825243) DBG | Checking permissions on dir: /home
	I0819 16:53:09.483415   18587 main.go:141] libmachine: (addons-825243) DBG | Skipping /home - not owner
	I0819 16:53:09.484369   18587 main.go:141] libmachine: (addons-825243) define libvirt domain using xml: 
	I0819 16:53:09.484388   18587 main.go:141] libmachine: (addons-825243) <domain type='kvm'>
	I0819 16:53:09.484399   18587 main.go:141] libmachine: (addons-825243)   <name>addons-825243</name>
	I0819 16:53:09.484408   18587 main.go:141] libmachine: (addons-825243)   <memory unit='MiB'>4000</memory>
	I0819 16:53:09.484416   18587 main.go:141] libmachine: (addons-825243)   <vcpu>2</vcpu>
	I0819 16:53:09.484429   18587 main.go:141] libmachine: (addons-825243)   <features>
	I0819 16:53:09.484441   18587 main.go:141] libmachine: (addons-825243)     <acpi/>
	I0819 16:53:09.484447   18587 main.go:141] libmachine: (addons-825243)     <apic/>
	I0819 16:53:09.484457   18587 main.go:141] libmachine: (addons-825243)     <pae/>
	I0819 16:53:09.484461   18587 main.go:141] libmachine: (addons-825243)     
	I0819 16:53:09.484487   18587 main.go:141] libmachine: (addons-825243)   </features>
	I0819 16:53:09.484515   18587 main.go:141] libmachine: (addons-825243)   <cpu mode='host-passthrough'>
	I0819 16:53:09.484523   18587 main.go:141] libmachine: (addons-825243)   
	I0819 16:53:09.484540   18587 main.go:141] libmachine: (addons-825243)   </cpu>
	I0819 16:53:09.484546   18587 main.go:141] libmachine: (addons-825243)   <os>
	I0819 16:53:09.484550   18587 main.go:141] libmachine: (addons-825243)     <type>hvm</type>
	I0819 16:53:09.484555   18587 main.go:141] libmachine: (addons-825243)     <boot dev='cdrom'/>
	I0819 16:53:09.484566   18587 main.go:141] libmachine: (addons-825243)     <boot dev='hd'/>
	I0819 16:53:09.484576   18587 main.go:141] libmachine: (addons-825243)     <bootmenu enable='no'/>
	I0819 16:53:09.484584   18587 main.go:141] libmachine: (addons-825243)   </os>
	I0819 16:53:09.484591   18587 main.go:141] libmachine: (addons-825243)   <devices>
	I0819 16:53:09.484599   18587 main.go:141] libmachine: (addons-825243)     <disk type='file' device='cdrom'>
	I0819 16:53:09.484607   18587 main.go:141] libmachine: (addons-825243)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/boot2docker.iso'/>
	I0819 16:53:09.484613   18587 main.go:141] libmachine: (addons-825243)       <target dev='hdc' bus='scsi'/>
	I0819 16:53:09.484620   18587 main.go:141] libmachine: (addons-825243)       <readonly/>
	I0819 16:53:09.484624   18587 main.go:141] libmachine: (addons-825243)     </disk>
	I0819 16:53:09.484630   18587 main.go:141] libmachine: (addons-825243)     <disk type='file' device='disk'>
	I0819 16:53:09.484640   18587 main.go:141] libmachine: (addons-825243)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 16:53:09.484649   18587 main.go:141] libmachine: (addons-825243)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/addons-825243.rawdisk'/>
	I0819 16:53:09.484659   18587 main.go:141] libmachine: (addons-825243)       <target dev='hda' bus='virtio'/>
	I0819 16:53:09.484664   18587 main.go:141] libmachine: (addons-825243)     </disk>
	I0819 16:53:09.484669   18587 main.go:141] libmachine: (addons-825243)     <interface type='network'>
	I0819 16:53:09.484677   18587 main.go:141] libmachine: (addons-825243)       <source network='mk-addons-825243'/>
	I0819 16:53:09.484681   18587 main.go:141] libmachine: (addons-825243)       <model type='virtio'/>
	I0819 16:53:09.484686   18587 main.go:141] libmachine: (addons-825243)     </interface>
	I0819 16:53:09.484693   18587 main.go:141] libmachine: (addons-825243)     <interface type='network'>
	I0819 16:53:09.484699   18587 main.go:141] libmachine: (addons-825243)       <source network='default'/>
	I0819 16:53:09.484706   18587 main.go:141] libmachine: (addons-825243)       <model type='virtio'/>
	I0819 16:53:09.484711   18587 main.go:141] libmachine: (addons-825243)     </interface>
	I0819 16:53:09.484718   18587 main.go:141] libmachine: (addons-825243)     <serial type='pty'>
	I0819 16:53:09.484731   18587 main.go:141] libmachine: (addons-825243)       <target port='0'/>
	I0819 16:53:09.484742   18587 main.go:141] libmachine: (addons-825243)     </serial>
	I0819 16:53:09.484769   18587 main.go:141] libmachine: (addons-825243)     <console type='pty'>
	I0819 16:53:09.484785   18587 main.go:141] libmachine: (addons-825243)       <target type='serial' port='0'/>
	I0819 16:53:09.484795   18587 main.go:141] libmachine: (addons-825243)     </console>
	I0819 16:53:09.484801   18587 main.go:141] libmachine: (addons-825243)     <rng model='virtio'>
	I0819 16:53:09.484821   18587 main.go:141] libmachine: (addons-825243)       <backend model='random'>/dev/random</backend>
	I0819 16:53:09.484839   18587 main.go:141] libmachine: (addons-825243)     </rng>
	I0819 16:53:09.484851   18587 main.go:141] libmachine: (addons-825243)     
	I0819 16:53:09.484861   18587 main.go:141] libmachine: (addons-825243)     
	I0819 16:53:09.484869   18587 main.go:141] libmachine: (addons-825243)   </devices>
	I0819 16:53:09.484878   18587 main.go:141] libmachine: (addons-825243) </domain>
	I0819 16:53:09.484889   18587 main.go:141] libmachine: (addons-825243) 
	I0819 16:53:09.491278   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:84:2c:54 in network default
	I0819 16:53:09.491999   18587 main.go:141] libmachine: (addons-825243) Ensuring networks are active...
	I0819 16:53:09.492018   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:09.492877   18587 main.go:141] libmachine: (addons-825243) Ensuring network default is active
	I0819 16:53:09.493203   18587 main.go:141] libmachine: (addons-825243) Ensuring network mk-addons-825243 is active
	I0819 16:53:09.494654   18587 main.go:141] libmachine: (addons-825243) Getting domain xml...
	I0819 16:53:09.495463   18587 main.go:141] libmachine: (addons-825243) Creating domain...
	I0819 16:53:11.125700   18587 main.go:141] libmachine: (addons-825243) Waiting to get IP...
	I0819 16:53:11.126626   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:11.127108   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:11.127161   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:11.127109   18609 retry.go:31] will retry after 284.983674ms: waiting for machine to come up
	I0819 16:53:11.413634   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:11.413967   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:11.413993   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:11.413910   18609 retry.go:31] will retry after 285.340726ms: waiting for machine to come up
	I0819 16:53:11.700258   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:11.700811   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:11.700836   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:11.700645   18609 retry.go:31] will retry after 472.018783ms: waiting for machine to come up
	I0819 16:53:12.173955   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:12.174450   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:12.174504   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:12.174413   18609 retry.go:31] will retry after 529.719767ms: waiting for machine to come up
	I0819 16:53:12.706375   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:12.706817   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:12.706845   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:12.706759   18609 retry.go:31] will retry after 634.102418ms: waiting for machine to come up
	I0819 16:53:13.342676   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:13.343033   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:13.343060   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:13.342986   18609 retry.go:31] will retry after 691.330212ms: waiting for machine to come up
	I0819 16:53:14.035619   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:14.035976   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:14.035999   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:14.035930   18609 retry.go:31] will retry after 876.541685ms: waiting for machine to come up
	I0819 16:53:14.913784   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:14.914194   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:14.914217   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:14.914150   18609 retry.go:31] will retry after 1.483212916s: waiting for machine to come up
	I0819 16:53:16.399732   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:16.400330   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:16.400355   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:16.400224   18609 retry.go:31] will retry after 1.267260439s: waiting for machine to come up
	I0819 16:53:17.669612   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:17.669991   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:17.670034   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:17.669944   18609 retry.go:31] will retry after 2.227693563s: waiting for machine to come up
	I0819 16:53:19.899042   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:19.899473   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:19.899505   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:19.899397   18609 retry.go:31] will retry after 2.167227329s: waiting for machine to come up
	I0819 16:53:22.069710   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:22.070126   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:22.070155   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:22.070059   18609 retry.go:31] will retry after 3.431382951s: waiting for machine to come up
	I0819 16:53:25.504118   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:25.504523   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find current IP address of domain addons-825243 in network mk-addons-825243
	I0819 16:53:25.504542   18587 main.go:141] libmachine: (addons-825243) DBG | I0819 16:53:25.504478   18609 retry.go:31] will retry after 4.43401048s: waiting for machine to come up
	I0819 16:53:29.939874   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:29.940324   18587 main.go:141] libmachine: (addons-825243) Found IP for machine: 192.168.39.129
	I0819 16:53:29.940358   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has current primary IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:29.940367   18587 main.go:141] libmachine: (addons-825243) Reserving static IP address...
	I0819 16:53:29.940787   18587 main.go:141] libmachine: (addons-825243) DBG | unable to find host DHCP lease matching {name: "addons-825243", mac: "52:54:00:fc:11:a2", ip: "192.168.39.129"} in network mk-addons-825243
	I0819 16:53:30.012100   18587 main.go:141] libmachine: (addons-825243) DBG | Getting to WaitForSSH function...
	I0819 16:53:30.012122   18587 main.go:141] libmachine: (addons-825243) Reserved static IP address: 192.168.39.129
	I0819 16:53:30.012134   18587 main.go:141] libmachine: (addons-825243) Waiting for SSH to be available...
	I0819 16:53:30.014643   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.015032   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.015077   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.015151   18587 main.go:141] libmachine: (addons-825243) DBG | Using SSH client type: external
	I0819 16:53:30.015204   18587 main.go:141] libmachine: (addons-825243) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa (-rw-------)
	I0819 16:53:30.015260   18587 main.go:141] libmachine: (addons-825243) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 16:53:30.015274   18587 main.go:141] libmachine: (addons-825243) DBG | About to run SSH command:
	I0819 16:53:30.015284   18587 main.go:141] libmachine: (addons-825243) DBG | exit 0
	I0819 16:53:30.148598   18587 main.go:141] libmachine: (addons-825243) DBG | SSH cmd err, output: <nil>: 
	I0819 16:53:30.148907   18587 main.go:141] libmachine: (addons-825243) KVM machine creation complete!
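The domain described by the XML above is now defined and running. The same libvirt objects can be inspected by hand with stock virsh commands against the qemu:///system URI used by the driver (domain addons-825243, network mk-addons-825243); a sketch, assuming virsh is available on the host:

	# Show state and resources of the domain the kvm2 driver just created
	virsh -c qemu:///system dominfo addons-825243
	# Dump the private network backing the 192.168.39.0/24 subnet
	virsh -c qemu:///system net-dumpxml mk-addons-825243
	# List DHCP-leased addresses for the domain (expected to include 192.168.39.129 per the lease above)
	virsh -c qemu:///system domifaddr addons-825243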
	I0819 16:53:30.149170   18587 main.go:141] libmachine: (addons-825243) Calling .GetConfigRaw
	I0819 16:53:30.149722   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:30.149875   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:30.150020   18587 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 16:53:30.150033   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:30.151330   18587 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 16:53:30.151344   18587 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 16:53:30.151351   18587 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 16:53:30.151357   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.153512   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.153837   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.153867   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.154001   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.154154   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.154301   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.154447   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.154571   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.154773   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.154786   18587 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 16:53:30.263931   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 16:53:30.263961   18587 main.go:141] libmachine: Detecting the provisioner...
	I0819 16:53:30.263972   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.266534   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.266902   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.266943   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.267092   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.267288   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.267445   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.267568   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.267721   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.267912   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.267926   18587 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 16:53:30.377151   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 16:53:30.377213   18587 main.go:141] libmachine: found compatible host: buildroot
	I0819 16:53:30.377222   18587 main.go:141] libmachine: Provisioning with buildroot...
	I0819 16:53:30.377244   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:30.377515   18587 buildroot.go:166] provisioning hostname "addons-825243"
	I0819 16:53:30.377549   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:30.377769   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.380025   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.380306   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.380357   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.380466   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.380711   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.380900   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.381047   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.381200   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.381414   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.381432   18587 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-825243 && echo "addons-825243" | sudo tee /etc/hostname
	I0819 16:53:30.501817   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-825243
	
	I0819 16:53:30.501840   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.504705   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.505133   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.505165   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.505318   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.505568   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.505744   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.505877   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.506011   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.506177   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.506192   18587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-825243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-825243/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-825243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 16:53:30.620583   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 16:53:30.620614   18587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 16:53:30.620634   18587 buildroot.go:174] setting up certificates
	I0819 16:53:30.620644   18587 provision.go:84] configureAuth start
	I0819 16:53:30.620653   18587 main.go:141] libmachine: (addons-825243) Calling .GetMachineName
	I0819 16:53:30.620933   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:30.623515   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.623848   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.623874   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.624044   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.626076   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.626376   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.626403   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.626523   18587 provision.go:143] copyHostCerts
	I0819 16:53:30.626595   18587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 16:53:30.626776   18587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 16:53:30.626872   18587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 16:53:30.626963   18587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.addons-825243 san=[127.0.0.1 192.168.39.129 addons-825243 localhost minikube]
	I0819 16:53:30.799091   18587 provision.go:177] copyRemoteCerts
	I0819 16:53:30.799142   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 16:53:30.799163   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.801644   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.801991   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.802019   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.802197   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.802450   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.802594   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.802753   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:30.887264   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 16:53:30.909649   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 16:53:30.930958   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 16:53:30.952658   18587 provision.go:87] duration metric: took 332.001257ms to configureAuth
	I0819 16:53:30.952688   18587 buildroot.go:189] setting minikube options for container-runtime
	I0819 16:53:30.952932   18587 config.go:182] Loaded profile config "addons-825243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 16:53:30.953077   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:30.955645   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.956015   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:30.956044   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:30.956304   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:30.956511   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.956709   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:30.956889   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:30.957023   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:30.957198   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:30.957214   18587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 16:53:31.221019   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
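The drop-in written above can be reproduced or adjusted by hand; a sketch using the same path and the service CIDR from this run:

    # write the CRI-O environment override shown above, then restart the runtime
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio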
	I0819 16:53:31.221058   18587 main.go:141] libmachine: Checking connection to Docker...
	I0819 16:53:31.221072   18587 main.go:141] libmachine: (addons-825243) Calling .GetURL
	I0819 16:53:31.222369   18587 main.go:141] libmachine: (addons-825243) DBG | Using libvirt version 6000000
	I0819 16:53:31.224344   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.224705   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.224733   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.224936   18587 main.go:141] libmachine: Docker is up and running!
	I0819 16:53:31.224952   18587 main.go:141] libmachine: Reticulating splines...
	I0819 16:53:31.224958   18587 client.go:171] duration metric: took 22.601858712s to LocalClient.Create
	I0819 16:53:31.224976   18587 start.go:167] duration metric: took 22.601906283s to libmachine.API.Create "addons-825243"
	I0819 16:53:31.224985   18587 start.go:293] postStartSetup for "addons-825243" (driver="kvm2")
	I0819 16:53:31.224994   18587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 16:53:31.225010   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.225251   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 16:53:31.225274   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.227188   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.227580   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.227608   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.227681   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.227854   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.228046   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.228195   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:31.311059   18587 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 16:53:31.315003   18587 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 16:53:31.315030   18587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 16:53:31.315108   18587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 16:53:31.315137   18587 start.go:296] duration metric: took 90.14732ms for postStartSetup
	I0819 16:53:31.315191   18587 main.go:141] libmachine: (addons-825243) Calling .GetConfigRaw
	I0819 16:53:31.315786   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:31.318474   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.318800   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.318827   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.319090   18587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/config.json ...
	I0819 16:53:31.319347   18587 start.go:128] duration metric: took 22.714227457s to createHost
	I0819 16:53:31.319376   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.321718   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.322089   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.322118   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.322231   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.322416   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.322606   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.322759   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.322967   18587 main.go:141] libmachine: Using SSH client type: native
	I0819 16:53:31.323144   18587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0819 16:53:31.323157   18587 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 16:53:31.433180   18587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724086411.406651672
	
	I0819 16:53:31.433208   18587 fix.go:216] guest clock: 1724086411.406651672
	I0819 16:53:31.433219   18587 fix.go:229] Guest: 2024-08-19 16:53:31.406651672 +0000 UTC Remote: 2024-08-19 16:53:31.319362036 +0000 UTC m=+22.815660156 (delta=87.289636ms)
	I0819 16:53:31.433249   18587 fix.go:200] guest clock delta is within tolerance: 87.289636ms
	I0819 16:53:31.433259   18587 start.go:83] releasing machines lock for "addons-825243", held for 22.828227323s
	I0819 16:53:31.433293   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.433566   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:31.436318   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.436675   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.436702   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.436825   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.437298   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.437516   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:31.437596   18587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 16:53:31.437656   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.437718   18587 ssh_runner.go:195] Run: cat /version.json
	I0819 16:53:31.437741   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:31.440062   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440353   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.440391   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440410   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440489   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.440636   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.440793   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.440894   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:31.440915   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:31.440943   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:31.441080   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:31.441278   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:31.441449   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:31.441586   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:31.574797   18587 ssh_runner.go:195] Run: systemctl --version
	I0819 16:53:31.580489   18587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 16:53:31.732117   18587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 16:53:31.737971   18587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 16:53:31.738025   18587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 16:53:31.752301   18587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 16:53:31.752321   18587 start.go:495] detecting cgroup driver to use...
	I0819 16:53:31.752377   18587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 16:53:31.768727   18587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 16:53:31.782325   18587 docker.go:217] disabling cri-docker service (if available) ...
	I0819 16:53:31.782385   18587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 16:53:31.795610   18587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 16:53:31.808951   18587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 16:53:31.914199   18587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 16:53:32.063850   18587 docker.go:233] disabling docker service ...
	I0819 16:53:32.063923   18587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 16:53:32.077510   18587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 16:53:32.089548   18587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 16:53:32.220361   18587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 16:53:32.347506   18587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
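The next command writes /etc/crictl.yaml so that crictl talks to the CRI-O socket; the file can also be created and sanity-checked by hand (a sketch, assuming crictl is on PATH as the log shows):

    # point crictl at CRI-O (same content the following command writes)
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl version    # should report RuntimeName: cri-o, as seen further down in this log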
	I0819 16:53:32.359855   18587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 16:53:32.376158   18587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 16:53:32.376221   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.385180   18587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 16:53:32.385233   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.394239   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.403073   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.411946   18587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 16:53:32.421088   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.430048   18587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 16:53:32.446015   18587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
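After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf can be checked in one pass; the expected values below are taken from the commands above (assuming the stock 02-crio.conf layout, ordering may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",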
	I0819 16:53:32.455316   18587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 16:53:32.463762   18587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 16:53:32.463818   18587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 16:53:32.475760   18587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 16:53:32.484331   18587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 16:53:32.615165   18587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 16:53:32.744212   18587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 16:53:32.744298   18587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 16:53:32.748582   18587 start.go:563] Will wait 60s for crictl version
	I0819 16:53:32.748638   18587 ssh_runner.go:195] Run: which crictl
	I0819 16:53:32.752028   18587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 16:53:32.786462   18587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 16:53:32.786579   18587 ssh_runner.go:195] Run: crio --version
	I0819 16:53:32.813073   18587 ssh_runner.go:195] Run: crio --version
	I0819 16:53:32.841172   18587 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 16:53:32.842568   18587 main.go:141] libmachine: (addons-825243) Calling .GetIP
	I0819 16:53:32.844961   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:32.845237   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:32.845261   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:32.845504   18587 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 16:53:32.849155   18587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 16:53:32.860951   18587 kubeadm.go:883] updating cluster {Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 16:53:32.861102   18587 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 16:53:32.861172   18587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 16:53:32.894853   18587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 16:53:32.894922   18587 ssh_runner.go:195] Run: which lz4
	I0819 16:53:32.898456   18587 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 16:53:32.902055   18587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 16:53:32.902077   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 16:53:34.032832   18587 crio.go:462] duration metric: took 1.134399043s to copy over tarball
	I0819 16:53:34.032892   18587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 16:53:36.098175   18587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.065253953s)
	I0819 16:53:36.098204   18587 crio.go:469] duration metric: took 2.065349568s to extract the tarball
	I0819 16:53:36.098210   18587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 16:53:36.134302   18587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 16:53:36.172698   18587 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 16:53:36.172720   18587 cache_images.go:84] Images are preloaded, skipping loading
	I0819 16:53:36.172728   18587 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.31.0 crio true true} ...
	I0819 16:53:36.172841   18587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-825243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 16:53:36.172908   18587 ssh_runner.go:195] Run: crio config
	I0819 16:53:36.216505   18587 cni.go:84] Creating CNI manager for ""
	I0819 16:53:36.216522   18587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:53:36.216533   18587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 16:53:36.216553   18587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-825243 NodeName:addons-825243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 16:53:36.216732   18587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-825243"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 16:53:36.216809   18587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 16:53:36.226099   18587 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 16:53:36.226168   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 16:53:36.234940   18587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 16:53:36.249484   18587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 16:53:36.264192   18587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 16:53:36.279091   18587 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0819 16:53:36.282440   18587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
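The one-liner above (also used earlier for host.minikube.internal) updates /etc/hosts atomically: drop any stale entry, append the new one into a temp file, then copy the file back in one step. A generic sketch with NAME and IP as placeholders:

    NAME=control-plane.minikube.internal   # placeholder
    IP=192.168.39.129                      # placeholder
    # filter out the old tab-separated entry and re-add it with the current IP
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$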
	I0819 16:53:36.293119   18587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 16:53:36.409356   18587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 16:53:36.425083   18587 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243 for IP: 192.168.39.129
	I0819 16:53:36.425107   18587 certs.go:194] generating shared ca certs ...
	I0819 16:53:36.425129   18587 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.425288   18587 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 16:53:36.554684   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt ...
	I0819 16:53:36.554712   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt: {Name:mkd8aac57f38305eebc3e70a3c299ec6319330da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.554878   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key ...
	I0819 16:53:36.554889   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key: {Name:mkb11833b68a299c4cc435820a97207697d835b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.554957   18587 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 16:53:36.734115   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt ...
	I0819 16:53:36.734143   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt: {Name:mk66fe69cc91ada8d79a785e88eb420be90ed98f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.734286   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key ...
	I0819 16:53:36.734298   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key: {Name:mk5aea4d87875f2ef5a82db7cdaada987d64c4ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.734362   18587 certs.go:256] generating profile certs ...
	I0819 16:53:36.734411   18587 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.key
	I0819 16:53:36.734431   18587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt with IP's: []
	I0819 16:53:36.783711   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt ...
	I0819 16:53:36.783735   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: {Name:mkf1e36c1ca10fb8a2556accec6a5bea26a80421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.783870   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.key ...
	I0819 16:53:36.783880   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.key: {Name:mkc7eda253cff4b6cd49b3cea00744ca86cf5a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.783940   18587 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf
	I0819 16:53:36.783957   18587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129]
	I0819 16:53:36.957886   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf ...
	I0819 16:53:36.957917   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf: {Name:mk458cd92693e214fb34fbded3481267662e7b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.958074   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf ...
	I0819 16:53:36.958086   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf: {Name:mkbc354f736341260a433d039e888aaf67f14dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:36.958153   18587 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt.ac98efbf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt
	I0819 16:53:36.958237   18587 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key.ac98efbf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key
	I0819 16:53:36.958285   18587 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key
	I0819 16:53:36.958316   18587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt with IP's: []
	I0819 16:53:37.233250   18587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt ...
	I0819 16:53:37.233279   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt: {Name:mk8cf6ef0fb7e7386eac5532fa835bd2720bd30e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:37.233471   18587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key ...
	I0819 16:53:37.233489   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key: {Name:mkeaa40640a707f170ff9c5f21c5f43bdb8d2e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:37.233703   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 16:53:37.233746   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 16:53:37.233781   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 16:53:37.233811   18587 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 16:53:37.234358   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 16:53:37.259542   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 16:53:37.296937   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 16:53:37.325078   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 16:53:37.345832   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 16:53:37.371630   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 16:53:37.392630   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 16:53:37.414151   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 16:53:37.435517   18587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 16:53:37.456119   18587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
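With the certificates copied into /var/lib/minikube/certs, the SANs generated above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129) can be confirmed directly on the node; a sketch using the path from the scp lines above:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'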
	I0819 16:53:37.470525   18587 ssh_runner.go:195] Run: openssl version
	I0819 16:53:37.475661   18587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 16:53:37.485120   18587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 16:53:37.489127   18587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 16:53:37.489176   18587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 16:53:37.494503   18587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
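The b5213941.0 link name above is the OpenSSL subject hash of the CA certificate (computed two lines earlier); the same link can be recreated generically:

    # compute the subject hash and create the hash-named symlink OpenSSL looks up
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"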
	I0819 16:53:37.503908   18587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 16:53:37.507557   18587 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 16:53:37.507605   18587 kubeadm.go:392] StartCluster: {Name:addons-825243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-825243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 16:53:37.507672   18587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 16:53:37.507707   18587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 16:53:37.541362   18587 cri.go:89] found id: ""
	I0819 16:53:37.541421   18587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 16:53:37.550424   18587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 16:53:37.559168   18587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 16:53:37.567609   18587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 16:53:37.567626   18587 kubeadm.go:157] found existing configuration files:
	
	I0819 16:53:37.567672   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 16:53:37.575702   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 16:53:37.575751   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 16:53:37.584198   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 16:53:37.592464   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 16:53:37.592517   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 16:53:37.600974   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 16:53:37.609113   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 16:53:37.609173   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 16:53:37.617676   18587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 16:53:37.625742   18587 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 16:53:37.625802   18587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 16:53:37.634307   18587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 16:53:37.687391   18587 kubeadm.go:310] W0819 16:53:37.668313     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 16:53:37.688033   18587 kubeadm.go:310] W0819 16:53:37.669257     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 16:53:37.785723   18587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
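The two deprecation warnings above suggest migrating the staged config off kubeadm.k8s.io/v1beta3. A sketch of the recommended command, using the kubeadm binary and config path from this run (the output path is a placeholder):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # placeholder output path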
	I0819 16:53:48.015049   18587 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 16:53:48.015099   18587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 16:53:48.015219   18587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 16:53:48.015392   18587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 16:53:48.015514   18587 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 16:53:48.015602   18587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 16:53:48.017055   18587 out.go:235]   - Generating certificates and keys ...
	I0819 16:53:48.017148   18587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 16:53:48.017229   18587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 16:53:48.017359   18587 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 16:53:48.017438   18587 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 16:53:48.017538   18587 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 16:53:48.017629   18587 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 16:53:48.017709   18587 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 16:53:48.017870   18587 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-825243 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0819 16:53:48.017947   18587 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 16:53:48.018106   18587 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-825243 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0819 16:53:48.018171   18587 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 16:53:48.018224   18587 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 16:53:48.018264   18587 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 16:53:48.018313   18587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 16:53:48.018366   18587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 16:53:48.018417   18587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 16:53:48.018461   18587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 16:53:48.018521   18587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 16:53:48.018594   18587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 16:53:48.018668   18587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 16:53:48.018750   18587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 16:53:48.020017   18587 out.go:235]   - Booting up control plane ...
	I0819 16:53:48.020124   18587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 16:53:48.020197   18587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 16:53:48.020253   18587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 16:53:48.020347   18587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 16:53:48.020462   18587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 16:53:48.020500   18587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 16:53:48.020596   18587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 16:53:48.020704   18587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 16:53:48.020780   18587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.123782ms
	I0819 16:53:48.020861   18587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 16:53:48.020958   18587 kubeadm.go:310] [api-check] The API server is healthy after 5.00138442s
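
The [kubelet-check] and [api-check] lines above are health polls: kubeadm repeatedly probes a healthz endpoint until it answers or the 4m0s budget runs out. A minimal Go sketch of that pattern, using the kubelet URL shown in the log (the API server check works the same way against its own endpoint); the retry interval here is only an assumption for the sketch:

    // Poll a healthz endpoint until it returns 200 OK or the deadline passes.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // endpoint reported healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // illustrative retry interval
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
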
	I0819 16:53:48.021067   18587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 16:53:48.021175   18587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 16:53:48.021246   18587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 16:53:48.021427   18587 kubeadm.go:310] [mark-control-plane] Marking the node addons-825243 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 16:53:48.021480   18587 kubeadm.go:310] [bootstrap-token] Using token: lfkoml.a5tqy6xdm24vx0tr
	I0819 16:53:48.022860   18587 out.go:235]   - Configuring RBAC rules ...
	I0819 16:53:48.022972   18587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 16:53:48.023076   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 16:53:48.023210   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 16:53:48.023328   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 16:53:48.023442   18587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 16:53:48.023517   18587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 16:53:48.023612   18587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 16:53:48.023657   18587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 16:53:48.023701   18587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 16:53:48.023709   18587 kubeadm.go:310] 
	I0819 16:53:48.023757   18587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 16:53:48.023763   18587 kubeadm.go:310] 
	I0819 16:53:48.023847   18587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 16:53:48.023856   18587 kubeadm.go:310] 
	I0819 16:53:48.023901   18587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 16:53:48.023955   18587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 16:53:48.024004   18587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 16:53:48.024010   18587 kubeadm.go:310] 
	I0819 16:53:48.024053   18587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 16:53:48.024059   18587 kubeadm.go:310] 
	I0819 16:53:48.024096   18587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 16:53:48.024102   18587 kubeadm.go:310] 
	I0819 16:53:48.024147   18587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 16:53:48.024209   18587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 16:53:48.024279   18587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 16:53:48.024290   18587 kubeadm.go:310] 
	I0819 16:53:48.024374   18587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 16:53:48.024460   18587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 16:53:48.024474   18587 kubeadm.go:310] 
	I0819 16:53:48.024593   18587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lfkoml.a5tqy6xdm24vx0tr \
	I0819 16:53:48.024741   18587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 16:53:48.024786   18587 kubeadm.go:310] 	--control-plane 
	I0819 16:53:48.024798   18587 kubeadm.go:310] 
	I0819 16:53:48.024882   18587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 16:53:48.024890   18587 kubeadm.go:310] 
	I0819 16:53:48.024961   18587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lfkoml.a5tqy6xdm24vx0tr \
	I0819 16:53:48.025061   18587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
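
The --discovery-token-ca-cert-hash value in the join commands above is derived from the cluster CA certificate: it is the SHA-256 of the certificate's DER-encoded public key info. A small Go sketch of that derivation, reading kubeadm's default CA path; this is an independent illustration, not minikube code:

    // Compute sha256:<hex> over the CA certificate's SubjectPublicKeyInfo,
    // which is what kubeadm prints as --discovery-token-ca-cert-hash.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
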
	I0819 16:53:48.025070   18587 cni.go:84] Creating CNI manager for ""
	I0819 16:53:48.025077   18587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:53:48.026496   18587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 16:53:48.027525   18587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 16:53:48.037497   18587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
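
The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen above ("kvm2" driver + "crio" runtime, recommending bridge). The sketch below prints a conflist of that general shape; the plugin list and subnet are illustrative assumptions, not the exact contents minikube copies:

    // Emit a minimal bridge+portmap CNI conflist of the kind referenced above.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    func main() {
        conflist := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":      "bridge",
                    "bridge":    "bridge",
                    "isGateway": true,
                    "ipMasq":    true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // illustrative pod CIDR
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, err := json.MarshalIndent(conflist, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }
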
	I0819 16:53:48.056679   18587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 16:53:48.056784   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-825243 minikube.k8s.io/updated_at=2024_08_19T16_53_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=addons-825243 minikube.k8s.io/primary=true
	I0819 16:53:48.056787   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:48.163438   18587 ops.go:34] apiserver oom_adj: -16
	I0819 16:53:48.163488   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:48.664052   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:49.163942   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:49.664293   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:50.164584   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:50.664515   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:51.163769   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:51.663609   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:52.163943   18587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 16:53:52.246774   18587 kubeadm.go:1113] duration metric: took 4.190075838s to wait for elevateKubeSystemPrivileges
	I0819 16:53:52.246811   18587 kubeadm.go:394] duration metric: took 14.739210377s to StartCluster
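
The repeated `kubectl get sa default` runs above are a poll loop: minikube retries roughly every 500 ms (visible in the timestamps) until the `default` service account exists, then reports elevateKubeSystemPrivileges as done. A minimal sketch of that pattern, where runKubectl is a hypothetical helper that shells out to kubectl:

    // Wait for the "default" service account to appear before proceeding.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runKubectl is a hypothetical helper; the real commands invoke the pinned
    // kubectl binary with an explicit --kubeconfig, as the log shows.
    func runKubectl(args ...string) error {
        return exec.Command("kubectl", args...).Run()
    }

    func waitForDefaultServiceAccount(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := runKubectl("get", "sa", "default"); err == nil {
                return nil // service account exists
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the timestamps above
        }
        return fmt.Errorf("timed out waiting for default service account")
    }

    func main() {
        if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
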
	I0819 16:53:52.246834   18587 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:52.246971   18587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 16:53:52.247390   18587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 16:53:52.247561   18587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 16:53:52.247582   18587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 16:53:52.247656   18587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
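
The interleaved `Setting addon ...`, `Checking if "addons-825243" exists`, and driver-launch lines that follow come from enabling the addons in the toEnable map concurrently, one worker per addon. A fan-out sketch of that shape; enableAddon is a stand-in for the real per-addon logic, and the map below lists only a few of the addons from the log:

    // Enable each requested addon concurrently and wait for all of them.
    package main

    import (
        "fmt"
        "sync"
    )

    func enableAddon(profile, name string) error {
        fmt.Printf("Setting addon %s=true in profile %q\n", name, profile)
        return nil
    }

    func main() {
        toEnable := map[string]bool{
            "ingress": true, "metrics-server": true, "registry": true,
            "storage-provisioner": true, "volumesnapshots": true, "yakd": true,
        }
        var wg sync.WaitGroup
        for name, enabled := range toEnable {
            if !enabled {
                continue
            }
            wg.Add(1)
            go func(n string) {
                defer wg.Done()
                if err := enableAddon("addons-825243", n); err != nil {
                    fmt.Printf("! Enabling %q returned an error: %v\n", n, err)
                }
            }(name)
        }
        wg.Wait()
    }
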
	I0819 16:53:52.247757   18587 addons.go:69] Setting yakd=true in profile "addons-825243"
	I0819 16:53:52.247773   18587 addons.go:69] Setting ingress=true in profile "addons-825243"
	I0819 16:53:52.247791   18587 addons.go:234] Setting addon yakd=true in "addons-825243"
	I0819 16:53:52.247802   18587 addons.go:234] Setting addon ingress=true in "addons-825243"
	I0819 16:53:52.247800   18587 config.go:182] Loaded profile config "addons-825243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 16:53:52.247795   18587 addons.go:69] Setting registry=true in profile "addons-825243"
	I0819 16:53:52.247815   18587 addons.go:69] Setting ingress-dns=true in profile "addons-825243"
	I0819 16:53:52.247823   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247833   18587 addons.go:234] Setting addon registry=true in "addons-825243"
	I0819 16:53:52.247836   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247798   18587 addons.go:69] Setting inspektor-gadget=true in profile "addons-825243"
	I0819 16:53:52.247850   18587 addons.go:234] Setting addon ingress-dns=true in "addons-825243"
	I0819 16:53:52.247863   18587 addons.go:234] Setting addon inspektor-gadget=true in "addons-825243"
	I0819 16:53:52.247872   18587 addons.go:69] Setting metrics-server=true in profile "addons-825243"
	I0819 16:53:52.247883   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247890   18587 addons.go:234] Setting addon metrics-server=true in "addons-825243"
	I0819 16:53:52.247895   18587 addons.go:69] Setting default-storageclass=true in profile "addons-825243"
	I0819 16:53:52.247901   18587 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-825243"
	I0819 16:53:52.247900   18587 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-825243"
	I0819 16:53:52.247918   18587 addons.go:69] Setting gcp-auth=true in profile "addons-825243"
	I0819 16:53:52.247923   18587 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-825243"
	I0819 16:53:52.247924   18587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-825243"
	I0819 16:53:52.247928   18587 addons.go:69] Setting volumesnapshots=true in profile "addons-825243"
	I0819 16:53:52.247928   18587 addons.go:69] Setting storage-provisioner=true in profile "addons-825243"
	I0819 16:53:52.247935   18587 mustload.go:65] Loading cluster: addons-825243
	I0819 16:53:52.247946   18587 addons.go:234] Setting addon volumesnapshots=true in "addons-825243"
	I0819 16:53:52.247947   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247948   18587 addons.go:234] Setting addon storage-provisioner=true in "addons-825243"
	I0819 16:53:52.247963   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247967   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247982   18587 addons.go:69] Setting helm-tiller=true in profile "addons-825243"
	I0819 16:53:52.248000   18587 addons.go:234] Setting addon helm-tiller=true in "addons-825243"
	I0819 16:53:52.248014   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248098   18587 config.go:182] Loaded profile config "addons-825243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 16:53:52.248307   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248319   18587 addons.go:69] Setting cloud-spanner=true in profile "addons-825243"
	I0819 16:53:52.248324   18587 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-825243"
	I0819 16:53:52.248344   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248349   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248362   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248362   18587 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-825243"
	I0819 16:53:52.248378   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248380   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248388   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247907   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248417   18587 addons.go:234] Setting addon cloud-spanner=true in "addons-825243"
	I0819 16:53:52.247919   18587 addons.go:69] Setting volcano=true in profile "addons-825243"
	I0819 16:53:52.248449   18587 addons.go:234] Setting addon volcano=true in "addons-825243"
	I0819 16:53:52.248475   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248308   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248503   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248688   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248729   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.247919   18587 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-825243"
	I0819 16:53:52.248738   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248764   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.247864   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.247890   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248307   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249034   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248311   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248829   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.248731   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249107   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249113   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249079   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249179   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249199   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.248475   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.248450   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249266   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249427   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249470   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249449   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.249492   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249503   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.249515   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.257405   18587 out.go:177] * Verifying Kubernetes components...
	I0819 16:53:52.259192   18587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 16:53:52.269000   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41427
	I0819 16:53:52.269027   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0819 16:53:52.269009   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0819 16:53:52.269414   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.269509   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
	I0819 16:53:52.269770   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.269956   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.270049   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.270064   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.270089   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.270357   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.270375   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.270434   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.270547   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.270563   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.271007   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.271041   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.271140   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.271336   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.271351   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.271400   18587 main.go:141] libmachine: () Calling .GetMachineName
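
Each `Launching plugin server for driver kvm2` / `Plugin server listening at address 127.0.0.1:<port>` / `Calling .GetVersion` sequence above is a separate driver process being started and then driven over a local RPC connection. A minimal sketch of that plugin pattern; the service and method names here are illustrative, not libmachine's actual RPC surface:

    // Launch a driver plugin binary, read the loopback address it announces,
    // and call it over net/rpc.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net/rpc"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("./docker-machine-driver-kvm2") // path is an assumption
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }

        // Assume the plugin prints something like "127.0.0.1:41427" on startup.
        line, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            log.Fatal(err)
        }
        client, err := rpc.Dial("tcp", strings.TrimSpace(line))
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        var version int // hypothetical method; mirrors the .GetVersion calls in the log
        if err := client.Call("DriverPlugin.GetVersion", struct{}{}, &version); err != nil {
            log.Fatal(err)
        }
        fmt.Println("plugin API version:", version)
    }
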
	I0819 16:53:52.289700   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.289737   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.290390   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
	I0819 16:53:52.290573   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0819 16:53:52.290617   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.290647   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.290686   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0819 16:53:52.290794   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.291204   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.291237   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.291735   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.291766   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.297535   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.297631   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.297673   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.298235   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.298252   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.298366   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.298375   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.298487   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.298497   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.298676   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.298839   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.299243   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.299265   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.304874   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.305335   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.305369   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.308505   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.308550   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.321276   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0819 16:53:52.322469   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.323142   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.323161   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.323560   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.323764   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.326948   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0819 16:53:52.327130   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I0819 16:53:52.327552   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.328079   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.328095   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.328431   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.328556   18587 addons.go:234] Setting addon default-storageclass=true in "addons-825243"
	I0819 16:53:52.328614   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.329017   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.329058   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.329063   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.329091   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.329738   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I0819 16:53:52.330170   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.330689   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.330706   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.331045   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.331577   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.331616   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.331811   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0819 16:53:52.332219   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.332687   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.332706   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.333361   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.333435   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.334025   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.334061   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.334949   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0819 16:53:52.335486   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.335961   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.335978   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.336291   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.336441   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.337282   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.337309   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.337767   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.338337   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.338370   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.338540   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.338748   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:52.338769   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:52.340443   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:53:52.340470   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0819 16:53:52.340493   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:52.340513   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:53:52.340524   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:52.340533   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:52.340715   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:52.340742   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 16:53:52.340845   18587 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 16:53:52.341354   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.342029   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.342047   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.342539   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.343110   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.343137   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.348422   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0819 16:53:52.349007   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.349552   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.349570   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.349970   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.350204   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.351029   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0819 16:53:52.351183   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0819 16:53:52.351592   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.352036   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.352094   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.352110   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.352495   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.352581   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.352600   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.352700   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.353307   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.353492   18587 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-825243"
	I0819 16:53:52.353531   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.353538   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.353910   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.353966   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.355421   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.356084   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.357877   18587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 16:53:52.357877   18587 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 16:53:52.359254   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 16:53:52.359270   18587 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 16:53:52.359291   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.359442   18587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 16:53:52.359454   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 16:53:52.359468   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.362785   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
	I0819 16:53:52.363091   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.363434   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.363636   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.363673   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.363994   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.364012   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.364077   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.364336   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.364350   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.364376   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.364548   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.364680   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.364700   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.364728   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.364938   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.364983   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.365134   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.365279   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.365397   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
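
Each `new ssh client` line records the connection details used to run commands on the node: the node IP and port, the per-machine private key, and the `docker` user. A sketch of building such a client with golang.org/x/crypto/ssh from the values shown in the log:

    // Open an SSH session to the node and run a command, mirroring the
    // sshutil/ssh_runner steps in the log (host key checking disabled, as is
    // typical for throwaway test VMs).
    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.168.39.129:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo mkdir -p /etc/cni/net.d")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%s", out)
    }
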
	I0819 16:53:52.367147   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.369386   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0819 16:53:52.369676   18587 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 16:53:52.370994   18587 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 16:53:52.371011   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 16:53:52.371028   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.371075   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0819 16:53:52.371702   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
	I0819 16:53:52.372027   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.372115   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.372906   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.372924   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.373358   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.373432   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0819 16:53:52.373588   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.373897   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.374058   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.374075   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.374126   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.374469   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.374520   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.374530   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.374534   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.374545   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.374631   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.374683   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.374857   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.374903   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.375332   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.375370   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.375714   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.375878   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.375979   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.376868   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I0819 16:53:52.377107   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.377129   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.377190   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.377747   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.377771   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.377850   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.378592   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.378720   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34203
	I0819 16:53:52.379001   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.379076   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.379541   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.379581   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.379629   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 16:53:52.379863   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.379989   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.380029   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.380845   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.380871   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
	I0819 16:53:52.380879   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.381282   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.381530   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:52.381848   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.381995   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.382006   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.382187   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 16:53:52.382540   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0819 16:53:52.382941   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.383388   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.383362   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.383434   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.383482   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.383519   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.383781   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.383789   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.383942   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.384610   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 16:53:52.384707   18587 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 16:53:52.385387   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.385661   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.386016   18587 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 16:53:52.386034   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 16:53:52.386053   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.386319   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.386322   18587 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 16:53:52.386379   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 16:53:52.386395   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.387521   18587 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 16:53:52.388349   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 16:53:52.389202   18587 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 16:53:52.389366   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.389367   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
	I0819 16:53:52.389887   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43763
	I0819 16:53:52.389917   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.389984   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.390004   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.390094   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.390206   18587 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 16:53:52.390218   18587 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 16:53:52.390235   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.390262   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.390303   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.390336   18587 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 16:53:52.390468   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.390602   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.390613   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.390732   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.390742   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.390936   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.391560   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 16:53:52.391573   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.391678   18587 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 16:53:52.391687   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 16:53:52.391701   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.391736   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.392969   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.393035   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.393061   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.393078   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.393175   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.393352   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.393561   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.393609   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.393813   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.394111   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.394153   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.394666   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.394738   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.394864   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.395009   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.395101   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.395184   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 16:53:52.395211   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.395888   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 16:53:52.396275   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.397175   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 16:53:52.397297   18587 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 16:53:52.397316   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.397983   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.398495   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.398515   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.398657   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0819 16:53:52.398747   18587 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 16:53:52.398811   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.398877   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 16:53:52.399702   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.399707   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.399795   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0819 16:53:52.399912   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 16:53:52.399929   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.399933   18587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 16:53:52.399949   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.400051   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.400413   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.400432   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.400504   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.401182   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.401201   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.401252   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.401302   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 16:53:52.401893   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.402098   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.402155   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.402399   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.402417   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.402807   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:52.402838   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:52.402987   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0819 16:53:52.403172   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.403332   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.403454   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.403490   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 16:53:52.403500   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.403564   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.403706   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.403963   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.403977   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.404066   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.404362   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.404518   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.404584   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.404607   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.404878   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.405053   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.405212   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.405349   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.405912   18587 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 16:53:52.405962   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 16:53:52.405985   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.406198   18587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 16:53:52.406213   18587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 16:53:52.406228   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.407572   18587 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 16:53:52.407587   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 16:53:52.407604   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.409008   18587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 16:53:52.409677   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.410130   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 16:53:52.410149   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 16:53:52.410175   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.410208   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.410180   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.410893   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.411119   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.411384   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.411527   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.411833   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
	I0819 16:53:52.412176   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.412364   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.412854   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.412872   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.413049   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.413246   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.413379   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.413490   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.413829   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.413842   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.414149   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.414154   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.414304   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.414552   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.414570   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.414724   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.414876   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.415031   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.415142   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.415941   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.416879   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0819 16:53:52.417217   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.417594   18587 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 16:53:52.417653   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.417667   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.417970   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.418138   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.418807   18587 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 16:53:52.418823   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 16:53:52.418838   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.421739   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.422108   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.422136   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.422316   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.422471   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.422625   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.422751   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.425817   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I0819 16:53:52.426091   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:52.426598   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:52.426617   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:52.427019   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:52.427198   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:52.428619   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:52.430161   18587 out.go:177]   - Using image docker.io/busybox:stable
	I0819 16:53:52.431431   18587 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 16:53:52.432522   18587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 16:53:52.432534   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 16:53:52.432545   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:52.435523   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.435891   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:52.435916   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:52.436148   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:52.436329   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:52.436485   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:52.436620   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:52.723069   18587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 16:53:52.723397   18587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 16:53:52.754991   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 16:53:52.757228   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 16:53:52.778403   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 16:53:52.778427   18587 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 16:53:52.794445   18587 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 16:53:52.794465   18587 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 16:53:52.839148   18587 node_ready.go:35] waiting up to 6m0s for node "addons-825243" to be "Ready" ...
	I0819 16:53:52.842565   18587 node_ready.go:49] node "addons-825243" has status "Ready":"True"
	I0819 16:53:52.842599   18587 node_ready.go:38] duration metric: took 3.407154ms for node "addons-825243" to be "Ready" ...
	I0819 16:53:52.842611   18587 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 16:53:52.842819   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 16:53:52.850460   18587 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace to be "Ready" ...
	I0819 16:53:52.878249   18587 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 16:53:52.878273   18587 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 16:53:52.909030   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 16:53:52.910812   18587 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 16:53:52.910824   18587 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 16:53:52.918852   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 16:53:52.929898   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 16:53:52.929924   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 16:53:52.944770   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 16:53:52.944794   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 16:53:52.969924   18587 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 16:53:52.969944   18587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 16:53:52.978183   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 16:53:52.978209   18587 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 16:53:52.995924   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 16:53:52.997191   18587 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 16:53:52.997214   18587 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 16:53:53.006416   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 16:53:53.095905   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 16:53:53.095930   18587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 16:53:53.139898   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 16:53:53.139920   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 16:53:53.142407   18587 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 16:53:53.142429   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 16:53:53.161616   18587 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 16:53:53.161645   18587 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 16:53:53.194960   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 16:53:53.194990   18587 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 16:53:53.212286   18587 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 16:53:53.212316   18587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 16:53:53.235584   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 16:53:53.292624   18587 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 16:53:53.292664   18587 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 16:53:53.307662   18587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 16:53:53.307689   18587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 16:53:53.323029   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 16:53:53.323057   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 16:53:53.359847   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 16:53:53.394497   18587 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 16:53:53.394530   18587 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 16:53:53.418255   18587 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 16:53:53.418285   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 16:53:53.515990   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 16:53:53.520691   18587 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 16:53:53.520714   18587 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 16:53:53.577771   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 16:53:53.586591   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 16:53:53.586618   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 16:53:53.702106   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 16:53:53.702129   18587 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 16:53:53.779788   18587 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 16:53:53.779814   18587 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 16:53:53.806029   18587 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 16:53:53.806051   18587 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 16:53:53.871317   18587 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 16:53:53.871337   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 16:53:53.966201   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 16:53:54.004220   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 16:53:54.004242   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 16:53:54.078714   18587 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 16:53:54.078736   18587 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 16:53:54.377143   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 16:53:54.377165   18587 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 16:53:54.401692   18587 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 16:53:54.401721   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 16:53:54.705088   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 16:53:54.705109   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 16:53:54.745336   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 16:53:54.856481   18587 pod_ready.go:103] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"False"
	I0819 16:53:54.952417   18587 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.228986555s)
	I0819 16:53:54.952456   18587 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 16:53:55.051911   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 16:53:55.051942   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 16:53:55.403064   18587 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 16:53:55.403097   18587 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 16:53:55.463407   18587 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-825243" context rescaled to 1 replicas
	I0819 16:53:55.824922   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 16:53:56.963950   18587 pod_ready.go:103] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"False"
	I0819 16:53:57.099976   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.344955977s)
	I0819 16:53:57.100026   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:57.100042   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:57.100340   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:53:57.100386   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:57.100405   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:53:57.100422   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:53:57.100434   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:53:57.100728   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:53:57.100785   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:53:57.100807   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:53:59.365067   18587 pod_ready.go:103] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"False"
	I0819 16:53:59.454670   18587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 16:53:59.454707   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:59.457906   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.458386   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:59.458411   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.458592   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:59.458820   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:59.458973   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:59.459097   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:53:59.639561   18587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 16:53:59.675965   18587 addons.go:234] Setting addon gcp-auth=true in "addons-825243"
	I0819 16:53:59.676023   18587 host.go:66] Checking if "addons-825243" exists ...
	I0819 16:53:59.676331   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:59.676372   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:59.691969   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46025
	I0819 16:53:59.692366   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:59.692955   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:59.692980   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:59.693278   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:59.693730   18587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 16:53:59.693764   18587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 16:53:59.708973   18587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I0819 16:53:59.709339   18587 main.go:141] libmachine: () Calling .GetVersion
	I0819 16:53:59.709777   18587 main.go:141] libmachine: Using API Version  1
	I0819 16:53:59.709798   18587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 16:53:59.710086   18587 main.go:141] libmachine: () Calling .GetMachineName
	I0819 16:53:59.710269   18587 main.go:141] libmachine: (addons-825243) Calling .GetState
	I0819 16:53:59.711758   18587 main.go:141] libmachine: (addons-825243) Calling .DriverName
	I0819 16:53:59.711949   18587 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 16:53:59.711967   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHHostname
	I0819 16:53:59.714948   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.715390   18587 main.go:141] libmachine: (addons-825243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:11:a2", ip: ""} in network mk-addons-825243: {Iface:virbr1 ExpiryTime:2024-08-19 17:53:23 +0000 UTC Type:0 Mac:52:54:00:fc:11:a2 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-825243 Clientid:01:52:54:00:fc:11:a2}
	I0819 16:53:59.715417   18587 main.go:141] libmachine: (addons-825243) DBG | domain addons-825243 has defined IP address 192.168.39.129 and MAC address 52:54:00:fc:11:a2 in network mk-addons-825243
	I0819 16:53:59.715597   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHPort
	I0819 16:53:59.715767   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHKeyPath
	I0819 16:53:59.715922   18587 main.go:141] libmachine: (addons-825243) Calling .GetSSHUsername
	I0819 16:53:59.716061   18587 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/addons-825243/id_rsa Username:docker}
	I0819 16:54:00.381337   18587 pod_ready.go:93] pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.381357   18587 pod_ready.go:82] duration metric: took 7.530875403s for pod "coredns-6f6b679f8f-g248k" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.381366   18587 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l9wkm" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.437753   18587 pod_ready.go:93] pod "coredns-6f6b679f8f-l9wkm" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.437774   18587 pod_ready.go:82] duration metric: took 56.401881ms for pod "coredns-6f6b679f8f-l9wkm" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.437785   18587 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.459340   18587 pod_ready.go:93] pod "etcd-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.459358   18587 pod_ready.go:82] duration metric: took 21.567726ms for pod "etcd-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.459367   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.462466   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.61961915s)
	I0819 16:54:00.462499   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.553441376s)
	I0819 16:54:00.462515   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462528   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462532   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462545   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462633   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.543759153s)
	I0819 16:54:00.462666   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462678   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462725   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.466764137s)
	I0819 16:54:00.462752   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462754   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.456315324s)
	I0819 16:54:00.462761   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462773   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462782   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462819   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.227208151s)
	I0819 16:54:00.462843   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.102966069s)
	I0819 16:54:00.462849   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462858   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462860   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462868   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.462951   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.946935113s)
	I0819 16:54:00.462972   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.462981   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463045   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.885241517s)
	I0819 16:54:00.463060   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463070   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463202   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.49696418s)
	W0819 16:54:00.463224   18587 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 16:54:00.463246   18587 retry.go:31] will retry after 289.430829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 16:54:00.463261   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463280   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463290   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463298   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463346   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.717956752s)
	I0819 16:54:00.463363   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463374   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463374   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463383   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463392   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463399   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463427   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.463451   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463458   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463466   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.463473   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.463564   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.463597   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463605   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463614   18587 addons.go:475] Verifying addon metrics-server=true in "addons-825243"
	I0819 16:54:00.463647   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.463657   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.463665   18587 addons.go:475] Verifying addon registry=true in "addons-825243"
	I0819 16:54:00.464731   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.464784   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.464793   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.464886   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.464904   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.464913   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.464920   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.465339   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.465375   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.465383   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466202   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466211   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466217   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466222   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466227   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466232   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466235   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466242   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466302   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466331   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466338   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466346   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466353   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466479   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466513   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466523   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.709268383s)
	I0819 16:54:00.466533   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466544   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466555   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466599   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466624   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466634   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466642   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466669   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466706   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466776   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466785   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466793   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466801   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466881   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.466916   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.466924   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.466933   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.466940   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.466987   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.467030   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.467037   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.467046   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.467060   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.467073   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.467122   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.467144   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.467151   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.467323   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.467331   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.467363   18587 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-825243 service yakd-dashboard -n yakd-dashboard
	
	I0819 16:54:00.468680   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.468715   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.468725   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.468858   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.468867   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.468880   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.468885   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.468869   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.468916   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.468925   18587 addons.go:475] Verifying addon ingress=true in "addons-825243"
	I0819 16:54:00.470319   18587 out.go:177] * Verifying ingress addon...
	I0819 16:54:00.470449   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.470446   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.470467   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.471348   18587 out.go:177] * Verifying registry addon...
	I0819 16:54:00.472285   18587 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 16:54:00.473282   18587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 16:54:00.486887   18587 pod_ready.go:93] pod "kube-apiserver-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.486903   18587 pod_ready.go:82] duration metric: took 27.530282ms for pod "kube-apiserver-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.486913   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.496549   18587 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 16:54:00.496570   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:00.498865   18587 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 16:54:00.498881   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:00.522453   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.522475   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.522829   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.522848   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.522860   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	W0819 16:54:00.522951   18587 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 16:54:00.530837   18587 pod_ready.go:93] pod "kube-controller-manager-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.530865   18587 pod_ready.go:82] duration metric: took 43.94413ms for pod "kube-controller-manager-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.530878   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmfp2" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.531140   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:00.531162   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:00.531425   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:00.531484   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:00.531501   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:00.752819   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 16:54:00.762033   18587 pod_ready.go:93] pod "kube-proxy-dmfp2" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:00.762055   18587 pod_ready.go:82] duration metric: took 231.170313ms for pod "kube-proxy-dmfp2" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.762065   18587 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:00.984631   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:00.984797   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:01.165953   18587 pod_ready.go:93] pod "kube-scheduler-addons-825243" in "kube-system" namespace has status "Ready":"True"
	I0819 16:54:01.165976   18587 pod_ready.go:82] duration metric: took 403.904172ms for pod "kube-scheduler-addons-825243" in "kube-system" namespace to be "Ready" ...
	I0819 16:54:01.165986   18587 pod_ready.go:39] duration metric: took 8.323356323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 16:54:01.166005   18587 api_server.go:52] waiting for apiserver process to appear ...
	I0819 16:54:01.166064   18587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 16:54:01.352384   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.527401686s)
	I0819 16:54:01.352409   18587 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.640438079s)
	I0819 16:54:01.352442   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:01.352465   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:01.352764   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:01.352801   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:01.352813   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:01.352824   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:01.352847   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:01.353133   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:01.353148   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:01.353162   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:01.353176   18587 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-825243"
	I0819 16:54:01.354092   18587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 16:54:01.354988   18587 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 16:54:01.356593   18587 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 16:54:01.357287   18587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 16:54:01.357739   18587 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 16:54:01.357753   18587 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 16:54:01.378507   18587 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 16:54:01.378528   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:01.480229   18587 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 16:54:01.480256   18587 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 16:54:01.550553   18587 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 16:54:01.550581   18587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 16:54:01.608167   18587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 16:54:01.768518   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:01.769180   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:01.863920   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:01.978773   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:01.979317   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:02.362764   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:02.476765   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:02.480489   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:02.699215   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.946345491s)
	I0819 16:54:02.699244   18587 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.533156738s)
	I0819 16:54:02.699272   18587 api_server.go:72] duration metric: took 10.451668936s to wait for apiserver process to appear ...
	I0819 16:54:02.699280   18587 api_server.go:88] waiting for apiserver healthz status ...
	I0819 16:54:02.699284   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:02.699301   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:02.699304   18587 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0819 16:54:02.699610   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:02.699715   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:02.699734   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:02.699744   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:02.699759   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:02.699986   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:02.700004   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:02.703854   18587 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0819 16:54:02.705134   18587 api_server.go:141] control plane version: v1.31.0
	I0819 16:54:02.705154   18587 api_server.go:131] duration metric: took 5.864705ms to wait for apiserver health ...
	I0819 16:54:02.705162   18587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 16:54:02.719240   18587 system_pods.go:59] 19 kube-system pods found
	I0819 16:54:02.719265   18587 system_pods.go:61] "coredns-6f6b679f8f-g248k" [e5b8dc0c-d315-406d-82d5-c89c95dcd0f5] Running
	I0819 16:54:02.719271   18587 system_pods.go:61] "coredns-6f6b679f8f-l9wkm" [82eb534d-3fdc-4c3f-8789-2617f4507636] Running
	I0819 16:54:02.719277   18587 system_pods.go:61] "csi-hostpath-attacher-0" [70c80be5-ed0a-49fb-b287-3bac65011256] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 16:54:02.719283   18587 system_pods.go:61] "csi-hostpath-resizer-0" [bcbd845e-9dc1-42d3-ac75-15e439c7f9df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 16:54:02.719289   18587 system_pods.go:61] "csi-hostpathplugin-bnwxn" [fd70584a-3d87-4343-9f83-29d5b98ce25e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 16:54:02.719293   18587 system_pods.go:61] "etcd-addons-825243" [f36e58d0-0cea-4171-a5ad-10ef0212a1ae] Running
	I0819 16:54:02.719297   18587 system_pods.go:61] "kube-apiserver-addons-825243" [3bfce86d-e822-436d-8eb5-11b42d736b53] Running
	I0819 16:54:02.719301   18587 system_pods.go:61] "kube-controller-manager-addons-825243" [27b791c8-efee-40e1-8039-9993e903c434] Running
	I0819 16:54:02.719309   18587 system_pods.go:61] "kube-ingress-dns-minikube" [4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0819 16:54:02.719315   18587 system_pods.go:61] "kube-proxy-dmfp2" [f676c55d-f283-4321-9815-02303a82a9c9] Running
	I0819 16:54:02.719321   18587 system_pods.go:61] "kube-scheduler-addons-825243" [bc4ff467-bf0c-4e8d-aae2-8e2363388539] Running
	I0819 16:54:02.719328   18587 system_pods.go:61] "metrics-server-8988944d9-j2w2h" [ba217649-2efe-4c98-8076-d73d63794bd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 16:54:02.719337   18587 system_pods.go:61] "nvidia-device-plugin-daemonset-vcml2" [8b9d9981-f3de-4307-9e9f-2ee8621a11c8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0819 16:54:02.719355   18587 system_pods.go:61] "registry-6fb4cdfc84-4g2dz" [eda791b5-556d-4ac5-b370-ea875a1d634a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 16:54:02.719370   18587 system_pods.go:61] "registry-proxy-s2gcq" [59c4a419-cfc5-4b2f-964c-8a0b25b0d01c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 16:54:02.719403   18587 system_pods.go:61] "snapshot-controller-56fcc65765-th5xc" [643f4b21-177b-46f0-8d81-5a2fa7141613] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.719417   18587 system_pods.go:61] "snapshot-controller-56fcc65765-w9w56" [b0a5580b-10bf-4aa1-93f5-30ffb08f129e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.719424   18587 system_pods.go:61] "storage-provisioner" [31d6dc33-8567-4b1a-8db4-36f09be7e471] Running
	I0819 16:54:02.719434   18587 system_pods.go:61] "tiller-deploy-b48cc5f79-wr8hg" [f1ed9b9d-e3d1-4e09-b94f-f29a67830f09] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 16:54:02.719444   18587 system_pods.go:74] duration metric: took 14.277028ms to wait for pod list to return data ...
	I0819 16:54:02.719455   18587 default_sa.go:34] waiting for default service account to be created ...
	I0819 16:54:02.728311   18587 default_sa.go:45] found service account: "default"
	I0819 16:54:02.728330   18587 default_sa.go:55] duration metric: took 8.869878ms for default service account to be created ...
	I0819 16:54:02.728339   18587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 16:54:02.739293   18587 system_pods.go:86] 19 kube-system pods found
	I0819 16:54:02.739316   18587 system_pods.go:89] "coredns-6f6b679f8f-g248k" [e5b8dc0c-d315-406d-82d5-c89c95dcd0f5] Running
	I0819 16:54:02.739322   18587 system_pods.go:89] "coredns-6f6b679f8f-l9wkm" [82eb534d-3fdc-4c3f-8789-2617f4507636] Running
	I0819 16:54:02.739328   18587 system_pods.go:89] "csi-hostpath-attacher-0" [70c80be5-ed0a-49fb-b287-3bac65011256] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 16:54:02.739334   18587 system_pods.go:89] "csi-hostpath-resizer-0" [bcbd845e-9dc1-42d3-ac75-15e439c7f9df] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 16:54:02.739341   18587 system_pods.go:89] "csi-hostpathplugin-bnwxn" [fd70584a-3d87-4343-9f83-29d5b98ce25e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 16:54:02.739345   18587 system_pods.go:89] "etcd-addons-825243" [f36e58d0-0cea-4171-a5ad-10ef0212a1ae] Running
	I0819 16:54:02.739349   18587 system_pods.go:89] "kube-apiserver-addons-825243" [3bfce86d-e822-436d-8eb5-11b42d736b53] Running
	I0819 16:54:02.739353   18587 system_pods.go:89] "kube-controller-manager-addons-825243" [27b791c8-efee-40e1-8039-9993e903c434] Running
	I0819 16:54:02.739363   18587 system_pods.go:89] "kube-ingress-dns-minikube" [4fb6f11a-8a8e-4d67-b27d-c9ec6d0b369c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0819 16:54:02.739369   18587 system_pods.go:89] "kube-proxy-dmfp2" [f676c55d-f283-4321-9815-02303a82a9c9] Running
	I0819 16:54:02.739378   18587 system_pods.go:89] "kube-scheduler-addons-825243" [bc4ff467-bf0c-4e8d-aae2-8e2363388539] Running
	I0819 16:54:02.739387   18587 system_pods.go:89] "metrics-server-8988944d9-j2w2h" [ba217649-2efe-4c98-8076-d73d63794bd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 16:54:02.739392   18587 system_pods.go:89] "nvidia-device-plugin-daemonset-vcml2" [8b9d9981-f3de-4307-9e9f-2ee8621a11c8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0819 16:54:02.739405   18587 system_pods.go:89] "registry-6fb4cdfc84-4g2dz" [eda791b5-556d-4ac5-b370-ea875a1d634a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 16:54:02.739411   18587 system_pods.go:89] "registry-proxy-s2gcq" [59c4a419-cfc5-4b2f-964c-8a0b25b0d01c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 16:54:02.739415   18587 system_pods.go:89] "snapshot-controller-56fcc65765-th5xc" [643f4b21-177b-46f0-8d81-5a2fa7141613] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.739421   18587 system_pods.go:89] "snapshot-controller-56fcc65765-w9w56" [b0a5580b-10bf-4aa1-93f5-30ffb08f129e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 16:54:02.739424   18587 system_pods.go:89] "storage-provisioner" [31d6dc33-8567-4b1a-8db4-36f09be7e471] Running
	I0819 16:54:02.739431   18587 system_pods.go:89] "tiller-deploy-b48cc5f79-wr8hg" [f1ed9b9d-e3d1-4e09-b94f-f29a67830f09] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0819 16:54:02.739440   18587 system_pods.go:126] duration metric: took 11.096419ms to wait for k8s-apps to be running ...
	I0819 16:54:02.739446   18587 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 16:54:02.739492   18587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 16:54:02.888230   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:03.008723   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:03.008808   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:03.335325   18587 system_svc.go:56] duration metric: took 595.862944ms WaitForService to wait for kubelet
	I0819 16:54:03.335360   18587 kubeadm.go:582] duration metric: took 11.087754239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 16:54:03.335386   18587 node_conditions.go:102] verifying NodePressure condition ...
	I0819 16:54:03.337969   18587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.729764964s)
	I0819 16:54:03.338013   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:03.338030   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:03.338294   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:03.338313   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:03.338344   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:03.338363   18587 main.go:141] libmachine: Making call to close driver server
	I0819 16:54:03.338371   18587 main.go:141] libmachine: (addons-825243) Calling .Close
	I0819 16:54:03.338619   18587 main.go:141] libmachine: Successfully made call to close driver server
	I0819 16:54:03.338626   18587 main.go:141] libmachine: (addons-825243) DBG | Closing plugin on server side
	I0819 16:54:03.338635   18587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 16:54:03.340206   18587 addons.go:475] Verifying addon gcp-auth=true in "addons-825243"
	I0819 16:54:03.342722   18587 out.go:177] * Verifying gcp-auth addon...
	I0819 16:54:03.344887   18587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 16:54:03.350473   18587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 16:54:03.350494   18587 node_conditions.go:123] node cpu capacity is 2
	I0819 16:54:03.350505   18587 node_conditions.go:105] duration metric: took 15.114103ms to run NodePressure ...
	I0819 16:54:03.350517   18587 start.go:241] waiting for startup goroutines ...
	I0819 16:54:03.365339   18587 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 16:54:03.365362   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:03.413902   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:03.478299   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:03.482404   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:03.849299   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:03.861961   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:03.978959   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:03.979904   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:04.349073   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:04.362259   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:04.478646   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:04.480578   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:04.848736   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:04.862578   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:04.976961   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:04.977413   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:05.348296   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:05.363643   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:05.477150   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:05.477249   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:05.848928   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:05.861712   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:05.976678   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:05.976864   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:06.349245   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:06.361613   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:06.477924   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:06.479113   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:06.852404   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:06.861997   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:06.976850   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:06.977777   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:07.430169   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:07.432784   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:07.530761   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:07.531136   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:07.848721   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:07.862284   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:07.977281   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:07.977583   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:08.348682   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:08.362411   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:08.477531   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:08.477608   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:08.848287   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:08.861901   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:08.977231   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:08.977517   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:09.349372   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:09.362911   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:09.526235   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:09.526612   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:09.851692   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:09.861357   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:09.977006   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:09.977010   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:10.348891   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:10.361288   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:10.476476   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:10.477217   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:10.848287   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:10.862420   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:10.976092   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:10.976533   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:11.349017   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:11.361213   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:11.476154   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:11.477390   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:11.849657   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:11.862417   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:11.977148   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:11.977819   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:12.348865   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:12.361084   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:12.477694   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:12.478059   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:12.848462   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:12.861660   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:12.977556   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:12.978196   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:13.348863   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:13.361205   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:13.476099   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:13.476783   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:13.848821   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:13.862369   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:13.977851   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:13.978006   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:14.349555   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:14.368647   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:14.476995   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:14.477034   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:14.848764   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:14.860957   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:14.976918   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:14.977447   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:15.347923   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:15.361401   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:15.477183   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:15.477410   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:15.848434   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:15.862315   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:15.976612   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:15.977522   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:16.348955   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:16.361470   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:16.477481   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:16.478306   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:16.848764   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:16.861827   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:16.977874   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:16.978252   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:17.350055   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:17.361217   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:17.475993   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:17.477707   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:17.850494   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:17.863325   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:17.977644   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:17.978279   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:18.348403   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:18.362280   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:18.476881   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:18.477345   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:18.848185   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:18.861841   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:18.978312   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:18.979010   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:19.348269   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:19.361892   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:19.476994   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:19.477325   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:19.848378   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:19.862206   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:19.976101   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:19.976728   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:20.351889   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:20.632575   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:20.632951   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:20.634241   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:20.849009   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:20.861837   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:20.976633   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:20.976659   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:21.348672   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:21.361526   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:21.476657   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:21.477842   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:21.849336   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:21.861778   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:21.976973   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:21.977507   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:22.348780   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:22.362192   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:22.476364   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:22.477486   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:22.848199   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:22.861198   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:22.976628   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:22.977088   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:23.349426   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:23.362095   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:23.476869   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:23.477941   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:23.849406   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:23.861851   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:23.978315   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:23.979006   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:24.348541   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:24.362533   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:24.477090   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:24.477222   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:24.849086   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:24.861321   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:24.977205   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:24.977958   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:25.348531   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:25.361894   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:25.477624   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:25.478207   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:25.887516   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:25.888243   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:25.986822   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:25.987216   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:26.349513   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:26.361909   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:26.477759   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:26.477882   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:26.849509   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:26.864663   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:26.979105   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 16:54:26.979236   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:27.348307   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:27.361447   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:27.478153   18587 kapi.go:107] duration metric: took 27.004865454s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 16:54:27.478342   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:27.848340   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:27.862259   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:27.976892   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:28.348659   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:28.361236   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:28.488928   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:28.848674   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:28.861722   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:28.977762   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:29.349185   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:29.362202   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:29.476654   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:29.848919   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:29.862623   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:30.098577   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:30.348040   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:30.361955   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:30.477142   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:30.849329   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:30.861881   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:30.977100   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:31.349162   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:31.361505   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:31.483453   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:31.848071   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:31.861910   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:31.977298   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:32.348304   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:32.361256   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:32.476121   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:32.849713   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:32.861851   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:32.976857   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:33.348312   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:33.362033   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:33.477362   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:33.848996   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:33.861733   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:33.976438   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:34.349075   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:34.361576   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:34.476401   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:34.850733   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:34.862562   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:34.978663   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:35.536950   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:35.537948   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:35.538637   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:35.848061   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:35.861910   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:35.976231   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:36.349232   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:36.361270   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:36.476130   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:36.850536   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:36.862411   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:36.976525   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:37.349006   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:37.362513   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:37.476671   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:37.848395   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:37.861530   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:37.976058   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:38.348848   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:38.360918   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:38.476700   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:38.848673   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:38.861139   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:38.975930   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:39.348621   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:39.360915   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:39.480310   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:39.849412   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:39.861293   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:39.976951   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:40.349149   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:40.362390   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:40.476625   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:40.848499   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:40.862169   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:40.976009   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:41.349511   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:41.362034   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:41.477494   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:41.849101   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:41.864934   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:41.976912   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:42.349241   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:42.361420   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:42.476183   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:42.848038   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:42.861251   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:42.976326   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:43.349375   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:43.362479   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:43.478153   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:43.848937   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:43.862105   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:43.975858   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:44.349057   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:44.361694   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:44.476332   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:44.851009   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:44.861711   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:44.977105   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:45.349029   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:45.361320   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:45.476267   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:45.848968   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:45.861514   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:45.976778   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:46.348066   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:46.361197   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:46.475899   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:46.848665   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:46.861521   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:46.976801   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:47.348985   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:47.361812   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:47.476497   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:47.848078   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:47.861607   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:47.976240   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:48.549844   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:48.550127   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:48.550286   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:48.849134   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:48.861674   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:48.976098   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:49.349182   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:49.364606   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:49.476143   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:49.848868   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:49.861534   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:49.976275   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:50.348082   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:50.361482   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:50.476630   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:50.849336   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:50.862490   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:50.976773   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:51.348135   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:51.361888   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:51.476436   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:51.847956   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:51.862100   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:51.977030   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:52.349451   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:52.362751   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:52.476721   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:52.848883   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:52.862754   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:52.976909   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:53.349036   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:53.361229   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:53.477172   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:53.848257   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:53.862038   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:53.976148   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:54.354039   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:54.361935   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:54.477395   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:54.849292   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:54.861781   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:54.976713   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:55.348229   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:55.362045   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:55.477017   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:55.848361   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:55.862482   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:55.977603   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:56.348736   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:56.363337   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:56.476931   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:56.849054   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:56.861649   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:56.976203   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:57.350018   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:57.361693   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:57.490201   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:57.849760   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:57.861096   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:57.976468   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:58.348542   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:58.363216   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:58.476189   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:58.848848   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:58.861195   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:58.976649   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:59.353565   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:59.362195   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:59.477547   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:54:59.849213   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:54:59.861622   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:54:59.978416   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:00.354004   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:00.369335   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:00.481924   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:00.849902   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:00.952433   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:00.978889   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:01.353524   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:01.364473   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:01.478847   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:01.849483   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:01.862954   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:01.977387   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:02.348096   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:02.361469   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:02.477058   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:02.849284   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:02.862115   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:02.976305   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:03.348884   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:03.361965   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:03.477810   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:03.848375   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:03.861703   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:03.976859   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:04.348245   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:04.361918   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:04.477922   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:04.849254   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:04.862091   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:04.976121   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:05.349366   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:05.362017   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:05.482590   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:05.849334   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:05.861679   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:05.976722   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:06.349394   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:06.362430   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:06.477037   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:06.851891   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:06.862727   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:06.976587   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:07.348866   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:07.362607   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:07.477905   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:07.849486   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:07.863200   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:07.976860   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:08.355353   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:08.361526   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:08.484384   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:08.849145   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:08.862251   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:08.976041   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:09.348483   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:09.361780   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:09.477297   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:09.849049   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:09.861526   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:09.976830   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:10.348474   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:10.362177   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:10.477687   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:10.848970   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:10.861488   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:10.976829   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:11.395330   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:11.395521   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:11.476890   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:11.848235   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:11.861672   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:11.976609   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:12.348719   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:12.361837   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:12.475757   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:12.849595   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:12.861591   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:12.976957   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:13.348641   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:13.362693   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:13.476157   18587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 16:55:13.848352   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:13.862236   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:13.977026   18587 kapi.go:107] duration metric: took 1m13.504739338s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 16:55:14.690676   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:14.693174   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:14.851002   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:14.865074   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:15.349109   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:15.362383   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:15.849324   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:15.862349   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:16.349245   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:16.361899   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:16.848912   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:16.863741   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:17.349364   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:17.361990   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:17.847894   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:17.861817   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:18.353484   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 16:55:18.453824   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:18.849305   18587 kapi.go:107] duration metric: took 1m15.504411683s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 16:55:18.851390   18587 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-825243 cluster.
	I0819 16:55:18.852862   18587 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 16:55:18.854336   18587 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
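	(The three gcp-auth messages above describe the opt-out mechanism: pods carrying the `gcp-auth-skip-secret` label are skipped by the credential-mounting webhook. As a minimal illustrative sketch only — not part of the test log — the Go snippet below builds a pod manifest carrying that label using the standard Kubernetes API types. The pod name, container image, and the label value "true" are assumptions for illustration; only the label key comes from the log message.)

	// Sketch: emit a pod manifest labeled so the gcp-auth webhook skips it.
	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				Labels: map[string]string{
					// key taken from the log message above; value "true" is an assumption
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"}, // placeholder container
				},
			},
		}

		// Print the manifest; it could then be applied with `kubectl apply -f -`.
		out, err := json.MarshalIndent(&pod, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}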
	I0819 16:55:18.861986   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:19.362152   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:19.861867   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:20.361993   18587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 16:55:20.862951   18587 kapi.go:107] duration metric: took 1m19.505661731s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 16:55:20.865036   18587 out.go:177] * Enabled addons: storage-provisioner, metrics-server, helm-tiller, nvidia-device-plugin, ingress-dns, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 16:55:20.866365   18587 addons.go:510] duration metric: took 1m28.618714412s for enable addons: enabled=[storage-provisioner metrics-server helm-tiller nvidia-device-plugin ingress-dns inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 16:55:20.866418   18587 start.go:246] waiting for cluster config update ...
	I0819 16:55:20.866447   18587 start.go:255] writing updated cluster config ...
	I0819 16:55:20.866708   18587 ssh_runner.go:195] Run: rm -f paused
	I0819 16:55:20.920180   18587 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 16:55:20.921946   18587 out.go:177] * Done! kubectl is now configured to use "addons-825243" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.312937305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086920312905524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abf5bc61-dda9-431b-89ec-e29947f80f09 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.313368779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ff865e4-ccbe-4358-9d52-3c4c47d5e020 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.313466343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ff865e4-ccbe-4358-9d52-3c4c47d5e020 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.313871139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ff865e4-ccbe-4358-9d52-3c4c47d5e020 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.348669746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a3b68e4-b4cf-4dc4-a6ae-a2ee3a5caa1e name=/runtime.v1.RuntimeService/Version
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.348757132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a3b68e4-b4cf-4dc4-a6ae-a2ee3a5caa1e name=/runtime.v1.RuntimeService/Version
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.350051724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=831b8b0f-a5f7-46f7-be9a-b738ad6e65e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.351298298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086920351261224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=831b8b0f-a5f7-46f7-be9a-b738ad6e65e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.351907937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39bdf5fd-83e9-4021-ab41-da7d4bc153b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.351975268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39bdf5fd-83e9-4021-ab41-da7d4bc153b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.352231540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39bdf5fd-83e9-4021-ab41-da7d4bc153b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.386953204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7da22297-05a8-4fa1-b1d5-a63752722608 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.387039593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7da22297-05a8-4fa1-b1d5-a63752722608 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.387990921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b41274e-adac-4363-902b-fe8c48763116 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.389224332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086920389194938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b41274e-adac-4363-902b-fe8c48763116 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.389694787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=222af534-c06e-465c-88b9-fedaca033924 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.389760287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=222af534-c06e-465c-88b9-fedaca033924 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.390055819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=222af534-c06e-465c-88b9-fedaca033924 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.430085006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=844e3ab6-f902-49e0-aeb4-1c29922ed03f name=/runtime.v1.RuntimeService/Version
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.430163315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=844e3ab6-f902-49e0-aeb4-1c29922ed03f name=/runtime.v1.RuntimeService/Version
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.431068550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56324790-1927-4938-8884-1b6cd53790d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.432271974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086920432244197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56324790-1927-4938-8884-1b6cd53790d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.432849623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7ba38d1-f5e8-492c-bc1d-d1ae812cd65c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.432902398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7ba38d1-f5e8-492c-bc1d-d1ae812cd65c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:02:00 addons-825243 crio[678]: time="2024-08-19 17:02:00.433152927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d6a6fb58a0bf0b43a92e5322399d1a19dafeb6e498f0c9e58661c6de12af12,PodSandboxId:033facd3d0d8cb5888ed94fa611c0364fe61163dedfa63767240ff925b85dda8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724086729034496506,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-pxx9b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b732b6c9-1421-444d-a5a9-4833f92b61cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fbd6bf19018638b169116055c3c23fc79ddd75fd2b524aa21a841135777b49,PodSandboxId:f12ced1940e7b82a3bfb6e748ae9b47d927c7f8a85ba83c0b2491bdefcc1d762,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724086587228393318,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 150e23cd-36ab-477d-80fd-445d04acef1c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d095ab106d7f728e34f06e9c289ca333169bd3e475c7b926b60ad2398cbd8ce9,PodSandboxId:5ce648696b6e9b360aed273b465e8e20a2de124de31690617b21358659442ec3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724086524469145582,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1f2e243-f50e-4c15-9
af8-eb7e16ad81ee,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3644cc9fe9230b2a6fc4e2aa138fa0d229cad155d5925e4cfae0e3f7eb9cdb,PodSandboxId:5e06970da9213dd3258247eab954425a912f79b942566f31b5b209ce42b67abd,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724086484813539851,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-jfc4v,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: d40a9d0c-12eb-4055-8b46-06fd1543bc68,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f69fffb929a00d1dcb6b5451c692f58ce39023f31c2c6ea1016aefca4355f4dc,PodSandboxId:e4a4fd6a630213bbf44eaec03fa7ef0cf4b9ec261e452487f662c68840953626,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724086479475272303,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-j2w2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba217649-2efe-4c98-8076-d73d63794bd7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce,PodSandboxId:d0103098f1809fed076ff6a12b7397da39430b740ffad687467f083452004442,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724086438642165959,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d6dc33-8567-4b1a-8db4-36f09be7e471,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed,PodSandboxId:3b802b9a05eb2f0a3f9cd8e16b4b5b0e0494f4200f6c5126daf32dde49857daf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724086435807157726,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-g248k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b8dc0c-d315-406d-82d5-c89c95dcd0f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9,PodSandboxId:2092262a8f5e0b214d134ca325cce531ff6a34aa9cb77f4b47134c5b2fd3d068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724086433626604073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmfp2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f676c55d-f283-4321-9815-02303a82a9c9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad,PodSandboxId:56a7a29e8b717b4cf95c6af87b42cdb3c11ce50aa1c232842b5c652d25daddb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724086422047914813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02d74c94d5fa76bc339c924874ff82c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501,PodSandboxId:e5b1c9dad8266c5208a6793f40161ee2e2351a132678bee169f48a6eb0ff8ab2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724086422050220924,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c73413c40ce740b1be5c2b9eb143a86,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48,PodSandboxId:57ad90b76c83c6455b8be897d522327afa6581aee2d00e1a6abb75507286ce39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724086422012572597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79b841347cb105acbb91d9c9b4c5a951,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81,PodSandboxId:cd96040973c29ee9450ed700b01f466721880994edeb0d341c5a1be66c116404,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724086421976516809,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b021647473e121728e60e652f49fb2bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7ba38d1-f5e8-492c-bc1d-d1ae812cd65c name=/runtime.v1.RuntimeService/ListContainers
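The repeated Version/ImageFsInfo/ListContainers entries above are routine kubelet polling of the CRI recorded at debug level; the "crio[678]" prefix shows they come from the crio systemd unit's journal on the node. A sketch of how the same journal could be pulled by hand, assuming the addons-825243 profile is still running:

    minikube -p addons-825243 ssh "sudo journalctl -u crio --no-pager | tail -n 100"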
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92d6a6fb58a0b       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   033facd3d0d8c       hello-world-app-55bf9c44b4-pxx9b
	75fbd6bf19018       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   f12ced1940e7b       nginx
	d095ab106d7f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   5ce648696b6e9       busybox
	7e3644cc9fe92       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   5e06970da9213       local-path-provisioner-86d989889c-jfc4v
	f69fffb929a00       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   e4a4fd6a63021       metrics-server-8988944d9-j2w2h
	6c2450e2dc005       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   d0103098f1809       storage-provisioner
	d72decfaa4067       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   3b802b9a05eb2       coredns-6f6b679f8f-g248k
	a93ec25eebd60       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        8 minutes ago       Running             kube-proxy                0                   2092262a8f5e0       kube-proxy-dmfp2
	b4daf922ea6fc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   e5b1c9dad8266       etcd-addons-825243
	59baf8452639b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        8 minutes ago       Running             kube-scheduler            0                   56a7a29e8b717       kube-scheduler-addons-825243
	0e6b65e02148e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        8 minutes ago       Running             kube-controller-manager   0                   57ad90b76c83c       kube-controller-manager-addons-825243
	d58ad92a674cc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        8 minutes ago       Running             kube-apiserver            0                   cd96040973c29       kube-apiserver-addons-825243
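All twelve containers in the table above are in the Running state with an attempt count of 0, including the nginx and hello-world-app pods. A roughly equivalent listing can be taken by hand with crictl from inside the guest (a sketch; crictl is bundled in the minikube ISO for cri-o runs):

    minikube -p addons-825243 ssh "sudo crictl ps -a"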
	
	
	==> coredns [d72decfaa40679a399c79e093e6c70273882962ba10ed91b35a807e0c62b4bed] <==
	[INFO] 10.244.0.7:59064 - 57053 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000252625s
	[INFO] 10.244.0.7:57627 - 28620 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107087s
	[INFO] 10.244.0.7:57627 - 64718 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122167s
	[INFO] 10.244.0.7:40848 - 34389 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151905s
	[INFO] 10.244.0.7:40848 - 39767 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111358s
	[INFO] 10.244.0.7:39316 - 45051 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164496s
	[INFO] 10.244.0.7:39316 - 50685 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127743s
	[INFO] 10.244.0.7:43887 - 3053 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000202588s
	[INFO] 10.244.0.7:43887 - 30184 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000268326s
	[INFO] 10.244.0.7:55844 - 39835 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074609s
	[INFO] 10.244.0.7:55844 - 58013 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127189s
	[INFO] 10.244.0.7:42607 - 2875 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069037s
	[INFO] 10.244.0.7:42607 - 63545 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085819s
	[INFO] 10.244.0.7:39438 - 41557 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000073605s
	[INFO] 10.244.0.7:39438 - 9558 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080925s
	[INFO] 10.244.0.22:37660 - 8634 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00040356s
	[INFO] 10.244.0.22:60149 - 48801 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000092823s
	[INFO] 10.244.0.22:51326 - 38513 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151422s
	[INFO] 10.244.0.22:37486 - 14650 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000078711s
	[INFO] 10.244.0.22:37747 - 34950 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076232s
	[INFO] 10.244.0.22:54355 - 16126 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101489s
	[INFO] 10.244.0.22:33461 - 33440 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000618964s
	[INFO] 10.244.0.22:37377 - 29868 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000542874s
	[INFO] 10.244.0.26:38718 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000605525s
	[INFO] 10.244.0.26:39649 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000169829s
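The NXDOMAIN answers above appear to be the pod resolver walking its search path (.kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local) before the fully qualified name resolves with NOERROR, i.e. normal search-suffix misses rather than DNS failures. The same log can be re-fetched with kubectl, using the pod name from the dump:

    kubectl --context addons-825243 -n kube-system logs coredns-6f6b679f8f-g248k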
	
	
	==> describe nodes <==
	Name:               addons-825243
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-825243
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=addons-825243
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T16_53_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-825243
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 16:53:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-825243
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:01:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 16:59:25 +0000   Mon, 19 Aug 2024 16:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 16:59:25 +0000   Mon, 19 Aug 2024 16:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 16:59:25 +0000   Mon, 19 Aug 2024 16:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 16:59:25 +0000   Mon, 19 Aug 2024 16:53:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-825243
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1d1d3a536f146e68e13d5373a247a6a
	  System UUID:                a1d1d3a5-36f1-46e6-8e13-d5373a247a6a
	  Boot ID:                    dc6cf311-c879-4ef5-9873-ffa2a469bfc9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  default                     hello-world-app-55bf9c44b4-pxx9b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 coredns-6f6b679f8f-g248k                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m8s
	  kube-system                 etcd-addons-825243                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m13s
	  kube-system                 kube-apiserver-addons-825243               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-addons-825243      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-dmfp2                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-addons-825243               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 metrics-server-8988944d9-j2w2h             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m3s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  local-path-storage          local-path-provisioner-86d989889c-jfc4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m6s   kube-proxy       
	  Normal  Starting                 8m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m13s  kubelet          Node addons-825243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m13s  kubelet          Node addons-825243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m13s  kubelet          Node addons-825243 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m12s  kubelet          Node addons-825243 status is now: NodeReady
	  Normal  RegisteredNode           8m9s   node-controller  Node addons-825243 event: Registered Node addons-825243 in Controller
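The node reports Ready with no taints, twelve non-terminated pods, and 850m CPU / 370Mi memory requested out of 2 CPUs and roughly 3.7Gi allocatable, so scheduling pressure does not look like a factor. This block matches what kubectl prints for the node and can be regenerated with:

    kubectl --context addons-825243 describe node addons-825243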
	
	
	==> dmesg <==
	[  +5.230540] kauditd_printk_skb: 131 callbacks suppressed
	[Aug19 16:54] kauditd_printk_skb: 164 callbacks suppressed
	[  +6.880953] kauditd_printk_skb: 36 callbacks suppressed
	[ +16.798661] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.123370] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.512985] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.049745] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.860120] kauditd_printk_skb: 17 callbacks suppressed
	[Aug19 16:55] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.472447] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.104652] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.462638] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.351321] kauditd_printk_skb: 52 callbacks suppressed
	[ +23.897931] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.157966] kauditd_printk_skb: 42 callbacks suppressed
	[Aug19 16:56] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.484918] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.545491] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.827641] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.643515] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.206068] kauditd_printk_skb: 13 callbacks suppressed
	[ +22.706274] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.232374] kauditd_printk_skb: 33 callbacks suppressed
	[Aug19 16:58] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.239037] kauditd_printk_skb: 21 callbacks suppressed
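The dmesg excerpt consists almost entirely of "kauditd_printk_skb: N callbacks suppressed" notices, which only mean the kernel rate-limited console printing of audit records; they are informational, not errors. The full ring buffer can be read from the guest with:

    minikube -p addons-825243 ssh "sudo dmesg | tail -n 200"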
	
	
	==> etcd [b4daf922ea6fc225852bbf19f290e3bcfc08e9c0f44712edb28e0dc60762d501] <==
	{"level":"info","ts":"2024-08-19T16:54:57.635526Z","caller":"traceutil/trace.go:171","msg":"trace[1457434730] transaction","detail":"{read_only:false; response_revision:1017; number_of_response:1; }","duration":"139.265928ms","start":"2024-08-19T16:54:57.496245Z","end":"2024-08-19T16:54:57.635510Z","steps":["trace[1457434730] 'process raft request'  (duration: 114.915852ms)","trace[1457434730] 'compare'  (duration: 23.998666ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T16:55:14.385723Z","caller":"traceutil/trace.go:171","msg":"trace[1684809458] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"186.147214ms","start":"2024-08-19T16:55:14.199560Z","end":"2024-08-19T16:55:14.385707Z","steps":["trace[1684809458] 'process raft request'  (duration: 186.03108ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:55:14.669344Z","caller":"traceutil/trace.go:171","msg":"trace[647184081] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"279.125547ms","start":"2024-08-19T16:55:14.390204Z","end":"2024-08-19T16:55:14.669330Z","steps":["trace[647184081] 'process raft request'  (duration: 279.051686ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:55:14.669907Z","caller":"traceutil/trace.go:171","msg":"trace[1699870266] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"337.825002ms","start":"2024-08-19T16:55:14.331972Z","end":"2024-08-19T16:55:14.669797Z","steps":["trace[1699870266] 'process raft request'  (duration: 335.221032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.672958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:55:14.331954Z","time spent":"340.926963ms","remote":"127.0.0.1:34592","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":779,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-qc9mh.17ed2f8cd862f785\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-bc57996ff-qc9mh.17ed2f8cd862f785\" value_size:673 lease:6981020788548569968 >> failure:<>"}
	{"level":"info","ts":"2024-08-19T16:55:14.670227Z","caller":"traceutil/trace.go:171","msg":"trace[299941957] linearizableReadLoop","detail":"{readStateIndex:1151; appliedIndex:1150; }","duration":"335.045369ms","start":"2024-08-19T16:55:14.335173Z","end":"2024-08-19T16:55:14.670218Z","steps":["trace[299941957] 'read index received'  (duration: 50.98208ms)","trace[299941957] 'applied index is now lower than readState.Index'  (duration: 284.06229ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T16:55:14.670399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.200065ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:14.673607Z","caller":"traceutil/trace.go:171","msg":"trace[1000591769] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"338.381925ms","start":"2024-08-19T16:55:14.335168Z","end":"2024-08-19T16:55:14.673550Z","steps":["trace[1000591769] 'agreement among raft nodes before linearized reading'  (duration: 335.080646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.673661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.555564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:14.673704Z","caller":"traceutil/trace.go:171","msg":"trace[1890172586] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"284.5965ms","start":"2024-08-19T16:55:14.389101Z","end":"2024-08-19T16:55:14.673697Z","steps":["trace[1890172586] 'agreement among raft nodes before linearized reading'  (duration: 284.549793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.673668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:55:14.335136Z","time spent":"338.517164ms","remote":"127.0.0.1:34702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-19T16:55:14.673628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.208709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:14.676512Z","caller":"traceutil/trace.go:171","msg":"trace[1526936199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"329.091354ms","start":"2024-08-19T16:55:14.347408Z","end":"2024-08-19T16:55:14.676500Z","steps":["trace[1526936199] 'agreement among raft nodes before linearized reading'  (duration: 326.195868ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:55:14.676595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:55:14.347375Z","time spent":"329.206873ms","remote":"127.0.0.1:34702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-19T16:55:26.353695Z","caller":"traceutil/trace.go:171","msg":"trace[1154825456] linearizableReadLoop","detail":"{readStateIndex:1228; appliedIndex:1227; }","duration":"103.801056ms","start":"2024-08-19T16:55:26.249873Z","end":"2024-08-19T16:55:26.353675Z","steps":["trace[1154825456] 'read index received'  (duration: 103.6311ms)","trace[1154825456] 'applied index is now lower than readState.Index'  (duration: 169.037µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T16:55:26.353949Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.044507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:55:26.354034Z","caller":"traceutil/trace.go:171","msg":"trace[450853085] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1192; }","duration":"104.156937ms","start":"2024-08-19T16:55:26.249867Z","end":"2024-08-19T16:55:26.354024Z","steps":["trace[450853085] 'agreement among raft nodes before linearized reading'  (duration: 104.01276ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:56:10.626763Z","caller":"traceutil/trace.go:171","msg":"trace[1015615996] transaction","detail":"{read_only:false; response_revision:1490; number_of_response:1; }","duration":"465.874581ms","start":"2024-08-19T16:56:10.160860Z","end":"2024-08-19T16:56:10.626734Z","steps":["trace[1015615996] 'process raft request'  (duration: 465.723442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:56:10.627018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T16:56:10.160847Z","time spent":"466.06939ms","remote":"127.0.0.1:34794","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1454 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-19T16:56:21.153997Z","caller":"traceutil/trace.go:171","msg":"trace[1300096641] transaction","detail":"{read_only:false; response_revision:1567; number_of_response:1; }","duration":"283.894989ms","start":"2024-08-19T16:56:20.870081Z","end":"2024-08-19T16:56:21.153976Z","steps":["trace[1300096641] 'process raft request'  (duration: 283.784113ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T16:56:21.154521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.698046ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:56:21.154559Z","caller":"traceutil/trace.go:171","msg":"trace[901507014] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1567; }","duration":"232.755264ms","start":"2024-08-19T16:56:20.921797Z","end":"2024-08-19T16:56:21.154552Z","steps":["trace[901507014] 'agreement among raft nodes before linearized reading'  (duration: 232.682572ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T16:56:21.154435Z","caller":"traceutil/trace.go:171","msg":"trace[1994802459] linearizableReadLoop","detail":"{readStateIndex:1623; appliedIndex:1622; }","duration":"232.553482ms","start":"2024-08-19T16:56:20.921870Z","end":"2024-08-19T16:56:21.154424Z","steps":["trace[1994802459] 'read index received'  (duration: 231.92339ms)","trace[1994802459] 'applied index is now lower than readState.Index'  (duration: 629.004µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T16:56:21.157510Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.368927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T16:56:21.157540Z","caller":"traceutil/trace.go:171","msg":"trace[2138731973] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1567; }","duration":"168.41973ms","start":"2024-08-19T16:56:20.989111Z","end":"2024-08-19T16:56:21.157531Z","steps":["trace[2138731973] 'agreement among raft nodes before linearized reading'  (duration: 165.60966ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:02:00 up 8 min,  0 users,  load average: 0.41, 0.71, 0.53
	Linux addons-825243 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d58ad92a674ccd49ed1ac6c6762b72efdc7e1a134037596a9bb9ab6eb77a2c81] <==
	E0819 16:55:48.405119       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.27.224:443: connect: connection refused" logger="UnhandledError"
	E0819 16:55:48.407178       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.27.224:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.27.224:443: connect: connection refused" logger="UnhandledError"
	I0819 16:55:48.460092       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0819 16:56:04.732719       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.161.50"}
	E0819 16:56:15.720734       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.129:8443->10.244.0.29:55860: read: connection reset by peer
	I0819 16:56:22.858892       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 16:56:23.041754       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.253.80"}
	I0819 16:56:25.171123       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 16:56:26.248393       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 16:56:29.050249       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0819 16:56:58.928166       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:58.928229       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:58.951102       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:58.951224       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:58.974777       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:58.975097       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:59.002268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:59.002356       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 16:56:59.066615       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 16:56:59.066663       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 16:57:00.002767       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 16:57:00.066957       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 16:57:00.109167       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 16:58:46.424206       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.87.0"}
	E0819 16:58:48.703619       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [0e6b65e02148e4a4beb939468541528052b6c74af3ed0680117121fbb8303f48] <==
	W0819 16:59:56.397033       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 16:59:56.397254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:00:05.029583       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:00:05.029640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:00:13.938061       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:00:13.938257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:00:41.475167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:00:41.475352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:00:44.062742       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:00:44.062904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:00:52.043193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:00:52.043241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:00:54.481540       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:00:54.481596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:01:22.719971       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:01:22.720032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:01:26.978215       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:01:26.978257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:01:27.830757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:01:27.830936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 17:01:42.509967       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:01:42.510042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 17:01:59.466592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="10.819µs"
	W0819 17:02:00.635683       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 17:02:00.635737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a93ec25eebd6003b585e9fd7a83f22315b4628b439ea0750f1748f6f854225c9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 16:53:54.470978       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 16:53:54.481797       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	E0819 16:53:54.481892       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 16:53:54.537240       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 16:53:54.537269       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 16:53:54.537322       1 server_linux.go:169] "Using iptables Proxier"
	I0819 16:53:54.540538       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 16:53:54.540797       1 server.go:483] "Version info" version="v1.31.0"
	I0819 16:53:54.544908       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 16:53:54.551708       1 config.go:104] "Starting endpoint slice config controller"
	I0819 16:53:54.551770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 16:53:54.551835       1 config.go:197] "Starting service config controller"
	I0819 16:53:54.551840       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 16:53:54.551893       1 config.go:326] "Starting node config controller"
	I0819 16:53:54.551919       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 16:53:54.653863       1 shared_informer.go:320] Caches are synced for service config
	I0819 16:53:54.653923       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 16:53:54.654444       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59baf8452639b8bfefbbb03c5b991b5e1e2045846e7cf24bf6295732f0e5c7ad] <==
	W0819 16:53:44.692748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 16:53:44.694121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.502564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 16:53:45.502611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.582124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 16:53:45.582188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.634449       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 16:53:45.634604       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 16:53:45.637722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 16:53:45.637764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.647124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 16:53:45.647224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.692269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 16:53:45.692417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.747401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 16:53:45.747449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.813296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 16:53:45.813406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.901673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 16:53:45.901781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.942301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 16:53:45.942573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 16:53:45.943532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 16:53:45.943614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 16:53:47.382755       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:00:57 addons-825243 kubelet[1223]: E0819 17:00:57.674503    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086857674122150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:07 addons-825243 kubelet[1223]: E0819 17:01:07.677026    1223 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086867676609416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:07 addons-825243 kubelet[1223]: E0819 17:01:07.677288    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086867676609416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:17 addons-825243 kubelet[1223]: E0819 17:01:17.680776    1223 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086877680221357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:17 addons-825243 kubelet[1223]: E0819 17:01:17.681285    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086877680221357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:27 addons-825243 kubelet[1223]: E0819 17:01:27.684202    1223 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086887683764042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:27 addons-825243 kubelet[1223]: E0819 17:01:27.684609    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086887683764042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:30 addons-825243 kubelet[1223]: I0819 17:01:30.291231    1223 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 17:01:37 addons-825243 kubelet[1223]: E0819 17:01:37.687978    1223 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086897687321685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:37 addons-825243 kubelet[1223]: E0819 17:01:37.688316    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086897687321685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:47 addons-825243 kubelet[1223]: E0819 17:01:47.304951    1223 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:01:47 addons-825243 kubelet[1223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:01:47 addons-825243 kubelet[1223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:01:47 addons-825243 kubelet[1223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:01:47 addons-825243 kubelet[1223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:01:47 addons-825243 kubelet[1223]: E0819 17:01:47.690748    1223 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086907690487346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:47 addons-825243 kubelet[1223]: E0819 17:01:47.690780    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086907690487346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:57 addons-825243 kubelet[1223]: E0819 17:01:57.693617    1223 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086917693205338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:01:57 addons-825243 kubelet[1223]: E0819 17:01:57.693659    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724086917693205338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593722,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:02:00 addons-825243 kubelet[1223]: I0819 17:02:00.840551    1223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7ldb\" (UniqueName: \"kubernetes.io/projected/ba217649-2efe-4c98-8076-d73d63794bd7-kube-api-access-g7ldb\") pod \"ba217649-2efe-4c98-8076-d73d63794bd7\" (UID: \"ba217649-2efe-4c98-8076-d73d63794bd7\") "
	Aug 19 17:02:00 addons-825243 kubelet[1223]: I0819 17:02:00.840629    1223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba217649-2efe-4c98-8076-d73d63794bd7-tmp-dir\") pod \"ba217649-2efe-4c98-8076-d73d63794bd7\" (UID: \"ba217649-2efe-4c98-8076-d73d63794bd7\") "
	Aug 19 17:02:00 addons-825243 kubelet[1223]: I0819 17:02:00.841104    1223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba217649-2efe-4c98-8076-d73d63794bd7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "ba217649-2efe-4c98-8076-d73d63794bd7" (UID: "ba217649-2efe-4c98-8076-d73d63794bd7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 17:02:00 addons-825243 kubelet[1223]: I0819 17:02:00.844545    1223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba217649-2efe-4c98-8076-d73d63794bd7-kube-api-access-g7ldb" (OuterVolumeSpecName: "kube-api-access-g7ldb") pod "ba217649-2efe-4c98-8076-d73d63794bd7" (UID: "ba217649-2efe-4c98-8076-d73d63794bd7"). InnerVolumeSpecName "kube-api-access-g7ldb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 17:02:00 addons-825243 kubelet[1223]: I0819 17:02:00.941979    1223 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba217649-2efe-4c98-8076-d73d63794bd7-tmp-dir\") on node \"addons-825243\" DevicePath \"\""
	Aug 19 17:02:00 addons-825243 kubelet[1223]: I0819 17:02:00.942724    1223 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g7ldb\" (UniqueName: \"kubernetes.io/projected/ba217649-2efe-4c98-8076-d73d63794bd7-kube-api-access-g7ldb\") on node \"addons-825243\" DevicePath \"\""
	
	
	==> storage-provisioner [6c2450e2dc00595f2c4df9fdc3ac7142bf1da96bfe8897e1e713f62ad78811ce] <==
	I0819 16:53:59.069065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 16:53:59.114083       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 16:53:59.126138       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 16:53:59.201475       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 16:53:59.201636       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-825243_200d2208-1fb1-4eb9-92c3-f32d08f0589d!
	I0819 16:53:59.202585       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd5f78eb-9430-4ee8-b358-eeaf905abaa0", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-825243_200d2208-1fb1-4eb9-92c3-f32d08f0589d became leader
	I0819 16:53:59.402361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-825243_200d2208-1fb1-4eb9-92c3-f32d08f0589d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-825243 -n addons-825243
helpers_test.go:261: (dbg) Run:  kubectl --context addons-825243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-8988944d9-j2w2h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-825243 describe pod metrics-server-8988944d9-j2w2h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-825243 describe pod metrics-server-8988944d9-j2w2h: exit status 1 (64.900854ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8988944d9-j2w2h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-825243 describe pod metrics-server-8988944d9-j2w2h: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (357.91s)
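
The kube-apiserver log above shows the aggregated metrics API (v1beta1.metrics.k8s.io, backed by the service at 10.97.27.224:443) refusing connections, and the post-mortem can no longer find the metrics-server pod. A hedged way to inspect that addon state by hand, outside the harness (the k8s-app=metrics-server label is assumed from the upstream metrics-server manifests):
	kubectl --context addons-825243 -n kube-system get deploy,pods -l k8s-app=metrics-server
	kubectl --context addons-825243 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-825243 top nodes
If the APIService reports Available=False, `kubectl top` will fail the same way the test's readiness check does.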

                                                
                                    
TestAddons/StoppedEnableDisable (154.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-825243
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-825243: exit status 82 (2m0.488388821s)

                                                
                                                
-- stdout --
	* Stopping node "addons-825243"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-825243" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-825243
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-825243: exit status 11 (21.492378544s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-825243" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-825243
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-825243: exit status 11 (6.143843649s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-825243" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-825243
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-825243: exit status 11 (6.144065592s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-825243" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.27s)
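
All three addon commands above fail for the same underlying reason: the preceding `stop` timed out after roughly two minutes with the guest still reported as "Running", and SSH to 192.168.39.129:22 then has no route to host. A hedged diagnostic sketch for reproducing this outside the harness with the kvm2 driver (the libvirt domain name is assumed to match the profile name):
	out/minikube-linux-amd64 status -p addons-825243
	sudo virsh list --all                                   # kvm2: domain is normally named after the profile
	out/minikube-linux-amd64 stop -p addons-825243 --alsologtostderr
	out/minikube-linux-amd64 logs -p addons-825243 --file=logs.txt
The last command is the same log collection step the error box above asks for when filing an issue.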

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 node stop m02 -v=7 --alsologtostderr
E0819 17:13:56.937419   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:14:37.899253   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:15:21.262494   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.456518356s)

                                                
                                                
-- stdout --
	* Stopping node "ha-227346-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:13:37.249574   32151 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:13:37.249724   32151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:13:37.249734   32151 out.go:358] Setting ErrFile to fd 2...
	I0819 17:13:37.249740   32151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:13:37.249937   32151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:13:37.250223   32151 mustload.go:65] Loading cluster: ha-227346
	I0819 17:13:37.250601   32151 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:13:37.250618   32151 stop.go:39] StopHost: ha-227346-m02
	I0819 17:13:37.250987   32151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:13:37.251034   32151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:13:37.266895   32151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0819 17:13:37.267343   32151 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:13:37.267885   32151 main.go:141] libmachine: Using API Version  1
	I0819 17:13:37.267911   32151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:13:37.268240   32151 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:13:37.270584   32151 out.go:177] * Stopping node "ha-227346-m02"  ...
	I0819 17:13:37.271864   32151 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 17:13:37.271885   32151 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:13:37.272094   32151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 17:13:37.272124   32151 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:13:37.274984   32151 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:13:37.275418   32151 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:13:37.275456   32151 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:13:37.275615   32151 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:13:37.275775   32151 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:13:37.275910   32151 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:13:37.276020   32151 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:13:37.363359   32151 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 17:13:37.414896   32151 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 17:13:37.467884   32151 main.go:141] libmachine: Stopping "ha-227346-m02"...
	I0819 17:13:37.467909   32151 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:13:37.469440   32151 main.go:141] libmachine: (ha-227346-m02) Calling .Stop
	I0819 17:13:37.472745   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 0/120
	I0819 17:13:38.474078   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 1/120
	I0819 17:13:39.475259   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 2/120
	I0819 17:13:40.476592   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 3/120
	I0819 17:13:41.478052   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 4/120
	I0819 17:13:42.480003   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 5/120
	I0819 17:13:43.481391   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 6/120
	I0819 17:13:44.483143   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 7/120
	I0819 17:13:45.485063   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 8/120
	I0819 17:13:46.486549   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 9/120
	I0819 17:13:47.488806   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 10/120
	I0819 17:13:48.490387   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 11/120
	I0819 17:13:49.491935   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 12/120
	I0819 17:13:50.493726   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 13/120
	I0819 17:13:51.496076   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 14/120
	I0819 17:13:52.497648   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 15/120
	I0819 17:13:53.499826   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 16/120
	I0819 17:13:54.501227   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 17/120
	I0819 17:13:55.503257   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 18/120
	I0819 17:13:56.504563   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 19/120
	I0819 17:13:57.506707   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 20/120
	I0819 17:13:58.507911   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 21/120
	I0819 17:13:59.509212   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 22/120
	I0819 17:14:00.510789   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 23/120
	I0819 17:14:01.512115   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 24/120
	I0819 17:14:02.514111   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 25/120
	I0819 17:14:03.515451   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 26/120
	I0819 17:14:04.516666   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 27/120
	I0819 17:14:05.518176   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 28/120
	I0819 17:14:06.519554   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 29/120
	I0819 17:14:07.521820   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 30/120
	I0819 17:14:08.523224   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 31/120
	I0819 17:14:09.524554   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 32/120
	I0819 17:14:10.525856   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 33/120
	I0819 17:14:11.527613   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 34/120
	I0819 17:14:12.529736   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 35/120
	I0819 17:14:13.531162   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 36/120
	I0819 17:14:14.532678   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 37/120
	I0819 17:14:15.534247   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 38/120
	I0819 17:14:16.535597   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 39/120
	I0819 17:14:17.537422   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 40/120
	I0819 17:14:18.538916   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 41/120
	I0819 17:14:19.540119   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 42/120
	I0819 17:14:20.541596   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 43/120
	I0819 17:14:21.543618   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 44/120
	I0819 17:14:22.545357   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 45/120
	I0819 17:14:23.546923   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 46/120
	I0819 17:14:24.548444   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 47/120
	I0819 17:14:25.549739   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 48/120
	I0819 17:14:26.551235   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 49/120
	I0819 17:14:27.553402   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 50/120
	I0819 17:14:28.555435   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 51/120
	I0819 17:14:29.556669   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 52/120
	I0819 17:14:30.558049   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 53/120
	I0819 17:14:31.559520   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 54/120
	I0819 17:14:32.560999   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 55/120
	I0819 17:14:33.563086   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 56/120
	I0819 17:14:34.564470   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 57/120
	I0819 17:14:35.565749   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 58/120
	I0819 17:14:36.567092   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 59/120
	I0819 17:14:37.568592   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 60/120
	I0819 17:14:38.569940   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 61/120
	I0819 17:14:39.571245   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 62/120
	I0819 17:14:40.572575   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 63/120
	I0819 17:14:41.574078   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 64/120
	I0819 17:14:42.575820   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 65/120
	I0819 17:14:43.577117   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 66/120
	I0819 17:14:44.579396   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 67/120
	I0819 17:14:45.580844   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 68/120
	I0819 17:14:46.582214   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 69/120
	I0819 17:14:47.584214   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 70/120
	I0819 17:14:48.585653   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 71/120
	I0819 17:14:49.587067   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 72/120
	I0819 17:14:50.588420   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 73/120
	I0819 17:14:51.589971   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 74/120
	I0819 17:14:52.592013   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 75/120
	I0819 17:14:53.593327   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 76/120
	I0819 17:14:54.595508   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 77/120
	I0819 17:14:55.596937   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 78/120
	I0819 17:14:56.599061   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 79/120
	I0819 17:14:57.601014   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 80/120
	I0819 17:14:58.603226   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 81/120
	I0819 17:14:59.604607   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 82/120
	I0819 17:15:00.605917   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 83/120
	I0819 17:15:01.607354   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 84/120
	I0819 17:15:02.609197   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 85/120
	I0819 17:15:03.611208   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 86/120
	I0819 17:15:04.612467   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 87/120
	I0819 17:15:05.613753   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 88/120
	I0819 17:15:06.615170   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 89/120
	I0819 17:15:07.617221   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 90/120
	I0819 17:15:08.619096   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 91/120
	I0819 17:15:09.621117   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 92/120
	I0819 17:15:10.622367   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 93/120
	I0819 17:15:11.623722   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 94/120
	I0819 17:15:12.625530   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 95/120
	I0819 17:15:13.627252   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 96/120
	I0819 17:15:14.628524   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 97/120
	I0819 17:15:15.629921   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 98/120
	I0819 17:15:16.631295   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 99/120
	I0819 17:15:17.633483   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 100/120
	I0819 17:15:18.634779   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 101/120
	I0819 17:15:19.636239   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 102/120
	I0819 17:15:20.637520   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 103/120
	I0819 17:15:21.638906   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 104/120
	I0819 17:15:22.640779   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 105/120
	I0819 17:15:23.642127   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 106/120
	I0819 17:15:24.643838   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 107/120
	I0819 17:15:25.646233   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 108/120
	I0819 17:15:26.647589   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 109/120
	I0819 17:15:27.648940   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 110/120
	I0819 17:15:28.650991   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 111/120
	I0819 17:15:29.652873   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 112/120
	I0819 17:15:30.654139   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 113/120
	I0819 17:15:31.655443   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 114/120
	I0819 17:15:32.657292   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 115/120
	I0819 17:15:33.659330   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 116/120
	I0819 17:15:34.660459   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 117/120
	I0819 17:15:35.661889   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 118/120
	I0819 17:15:36.663039   32151 main.go:141] libmachine: (ha-227346-m02) Waiting for machine to stop 119/120
	I0819 17:15:37.663844   32151 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 17:15:37.664043   32151 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-227346 node stop m02 -v=7 --alsologtostderr": exit status 30
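The node stop above fails because the kvm2 driver's graceful-shutdown wait is bounded: the log shows one poll per second for 120 attempts, and the domain still reports "Running" when the budget runs out, so the command exits with status 30. The Go sketch below illustrates that kind of bounded stop-and-poll loop; stopVM and vmState are hypothetical stand-ins for the driver's shutdown request and state query, not real minikube/libmachine APIs.

// Bounded "wait for shutdown" loop mirroring the 64/120 ... 119/120 messages above.
// stopVM and vmState are hypothetical helpers, not minikube/libmachine APIs.
package main

import (
	"errors"
	"fmt"
	"time"
)

func stopVM(name string) error   { return nil }       // hypothetical: request an ACPI shutdown from the hypervisor
func vmState(name string) string { return "Running" } // hypothetical: query the current domain state

func waitForStop(name string, attempts int, interval time.Duration) error {
	if err := stopVM(name); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if vmState(name) == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	// Matches the error surfaced above; the caller may fall back to a hard power-off.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop("ha-227346-m02", 120, time.Second); err != nil {
		fmt.Println("stop err:", err)
	}
}

With a one-second interval, the 120 attempts account for roughly the two minutes the stop command spends before reporting exit status 30.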
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 3 (19.018440065s)

                                                
                                                
-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:15:37.707438   32581 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:15:37.707730   32581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:15:37.707740   32581 out.go:358] Setting ErrFile to fd 2...
	I0819 17:15:37.707746   32581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:15:37.707972   32581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:15:37.708194   32581 out.go:352] Setting JSON to false
	I0819 17:15:37.708227   32581 mustload.go:65] Loading cluster: ha-227346
	I0819 17:15:37.708326   32581 notify.go:220] Checking for updates...
	I0819 17:15:37.708715   32581 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:15:37.708732   32581 status.go:255] checking status of ha-227346 ...
	I0819 17:15:37.709148   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:37.709220   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:37.724285   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
	I0819 17:15:37.724717   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:37.725227   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:37.725246   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:37.725652   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:37.725845   32581 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:15:37.727335   32581 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:15:37.727354   32581 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:15:37.727755   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:37.727794   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:37.743370   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0819 17:15:37.743716   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:37.744166   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:37.744190   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:37.744506   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:37.744698   32581 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:15:37.747522   32581 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:15:37.747998   32581 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:15:37.748030   32581 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:15:37.748143   32581 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:15:37.748447   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:37.748478   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:37.762658   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
	I0819 17:15:37.763125   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:37.763652   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:37.763665   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:37.763945   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:37.764171   32581 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:15:37.764395   32581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:15:37.764429   32581 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:15:37.767097   32581 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:15:37.767511   32581 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:15:37.767559   32581 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:15:37.767598   32581 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:15:37.767782   32581 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:15:37.767932   32581 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:15:37.768070   32581 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:15:37.854154   32581 ssh_runner.go:195] Run: systemctl --version
	I0819 17:15:37.860710   32581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:15:37.877118   32581 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:15:37.877156   32581 api_server.go:166] Checking apiserver status ...
	I0819 17:15:37.877194   32581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:15:37.895291   32581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:15:37.910885   32581 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:15:37.910945   32581 ssh_runner.go:195] Run: ls
	I0819 17:15:37.916161   32581 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:15:37.922473   32581 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:15:37.922496   32581 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:15:37.922506   32581 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:15:37.922522   32581 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:15:37.922817   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:37.922852   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:37.937503   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34337
	I0819 17:15:37.937898   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:37.938407   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:37.938432   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:37.938817   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:37.939042   32581 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:15:37.940530   32581 status.go:330] ha-227346-m02 host status = "Running" (err=<nil>)
	I0819 17:15:37.940546   32581 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:15:37.940982   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:37.941030   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:37.956113   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I0819 17:15:37.956466   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:37.956867   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:37.956901   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:37.957206   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:37.957440   32581 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:15:37.959899   32581 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:15:37.960322   32581 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:15:37.960346   32581 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:15:37.960480   32581 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:15:37.960799   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:37.960841   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:37.974954   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38381
	I0819 17:15:37.975326   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:37.975734   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:37.975755   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:37.976045   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:37.976200   32581 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:15:37.976398   32581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:15:37.976421   32581 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:15:37.979014   32581 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:15:37.979413   32581 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:15:37.979432   32581 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:15:37.979557   32581 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:15:37.979731   32581 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:15:37.979888   32581 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:15:37.980039   32581 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	W0819 17:15:56.333018   32581 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:15:56.333107   32581 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	E0819 17:15:56.333122   32581 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:15:56.333130   32581 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 17:15:56.333162   32581 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:15:56.333177   32581 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:15:56.333480   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:56.333519   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:56.348252   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I0819 17:15:56.348721   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:56.349233   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:56.349259   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:56.349531   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:56.349702   32581 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:15:56.351375   32581 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:15:56.351389   32581 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:15:56.351740   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:56.351778   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:56.366690   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0819 17:15:56.367099   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:56.367633   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:56.367652   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:56.367941   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:56.368129   32581 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:15:56.370892   32581 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:15:56.371372   32581 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:15:56.371402   32581 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:15:56.371535   32581 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:15:56.371884   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:56.371921   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:56.387033   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I0819 17:15:56.387453   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:56.387904   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:56.387927   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:56.388270   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:56.388483   32581 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:15:56.388723   32581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:15:56.388767   32581 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:15:56.391775   32581 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:15:56.392165   32581 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:15:56.392189   32581 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:15:56.392309   32581 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:15:56.392481   32581 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:15:56.392601   32581 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:15:56.392740   32581 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:15:56.473389   32581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:15:56.491979   32581 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:15:56.492012   32581 api_server.go:166] Checking apiserver status ...
	I0819 17:15:56.492065   32581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:15:56.506934   32581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:15:56.515933   32581 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:15:56.515983   32581 ssh_runner.go:195] Run: ls
	I0819 17:15:56.520614   32581 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:15:56.526881   32581 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:15:56.526904   32581 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:15:56.526916   32581 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:15:56.526934   32581 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:15:56.527315   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:56.527364   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:56.542320   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0819 17:15:56.542728   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:56.543209   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:56.543232   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:56.543568   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:56.543776   32581 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:15:56.545380   32581 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:15:56.545400   32581 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:15:56.545772   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:56.545817   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:56.560701   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36491
	I0819 17:15:56.561193   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:56.561754   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:56.561773   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:56.562057   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:56.562232   32581 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:15:56.564864   32581 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:15:56.565266   32581 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:15:56.565299   32581 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:15:56.565427   32581 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:15:56.565726   32581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:15:56.565763   32581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:15:56.580300   32581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I0819 17:15:56.580815   32581 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:15:56.581381   32581 main.go:141] libmachine: Using API Version  1
	I0819 17:15:56.581406   32581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:15:56.581710   32581 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:15:56.581915   32581 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:15:56.582125   32581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:15:56.582147   32581 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:15:56.585019   32581 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:15:56.585494   32581 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:15:56.585517   32581 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:15:56.585632   32581 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:15:56.585801   32581 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:15:56.585943   32581 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:15:56.586109   32581 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:15:56.666015   32581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:15:56.682641   32581 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr" : exit status 3
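The status run above degrades per node rather than failing outright: for ha-227346-m02 the SSH dial gives up with "no route to host" and the node is reported Host:Error / Kubelet:Nonexistent, while the reachable control-plane nodes are additionally checked against the shared apiserver endpoint at https://192.168.39.254:8443/healthz. The sketch below illustrates those two probes in plain Go; it reuses the addresses from the log but is not minikube's status code, and it skips TLS verification on the assumption that the apiserver certificate is signed by the cluster's own CA rather than a system-trusted one.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

// probeSSH checks whether the node's SSH port answers at all; a guest that was
// just powered off typically fails with an error like the one in the log:
// "dial tcp 192.168.39.189:22: connect: no route to host".
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

// probeHealthz asks the apiserver health endpoint for a 200, skipping
// certificate verification because the cluster CA is not in the system store.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println("m02 ssh:", probeSSH("192.168.39.189:22"))
	fmt.Println("apiserver:", probeHealthz("https://192.168.39.254:8443/healthz"))
}

Most of the 19 s the status command takes is spent waiting for the m02 dial to fail (17:15:37.98 to 17:15:56.33 in the log) before the remaining nodes are checked.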
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-227346 -n ha-227346
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-227346 logs -n 25: (1.324718093s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346:/home/docker/cp-test_ha-227346-m03_ha-227346.txt                      |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346 sudo cat                                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346.txt                                |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m02:/home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m04 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp testdata/cp-test.txt                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346:/home/docker/cp-test_ha-227346-m04_ha-227346.txt                      |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346 sudo cat                                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346.txt                                |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m02:/home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03:/home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m03 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-227346 node stop m02 -v=7                                                    | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:09:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:09:04.036568   28158 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:09:04.036858   28158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:09:04.036870   28158 out.go:358] Setting ErrFile to fd 2...
	I0819 17:09:04.036875   28158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:09:04.037049   28158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:09:04.037651   28158 out.go:352] Setting JSON to false
	I0819 17:09:04.038490   28158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3089,"bootTime":1724084255,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:09:04.038542   28158 start.go:139] virtualization: kvm guest
	I0819 17:09:04.040721   28158 out.go:177] * [ha-227346] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:09:04.042005   28158 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:09:04.042023   28158 notify.go:220] Checking for updates...
	I0819 17:09:04.044532   28158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:09:04.045856   28158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:09:04.046961   28158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:04.048020   28158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:09:04.049070   28158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:09:04.050387   28158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:09:04.083918   28158 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 17:09:04.085051   28158 start.go:297] selected driver: kvm2
	I0819 17:09:04.085070   28158 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:09:04.085083   28158 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:09:04.086023   28158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:09:04.086110   28158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:09:04.100306   28158 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:09:04.100353   28158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:09:04.100592   28158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:09:04.100668   28158 cni.go:84] Creating CNI manager for ""
	I0819 17:09:04.100683   28158 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 17:09:04.100690   28158 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:09:04.100777   28158 start.go:340] cluster config:
	{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:09:04.100905   28158 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:09:04.102500   28158 out.go:177] * Starting "ha-227346" primary control-plane node in "ha-227346" cluster
	I0819 17:09:04.103613   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:09:04.103644   28158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:09:04.103657   28158 cache.go:56] Caching tarball of preloaded images
	I0819 17:09:04.103727   28158 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:09:04.103738   28158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:09:04.104024   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:04.104055   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json: {Name:mk6e7d11c4e5aa09a7b1c55a1b184f3bbbc1bb77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:04.104199   28158 start.go:360] acquireMachinesLock for ha-227346: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:09:04.104247   28158 start.go:364] duration metric: took 24.55µs to acquireMachinesLock for "ha-227346"
	I0819 17:09:04.104270   28158 start.go:93] Provisioning new machine with config: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:09:04.104337   28158 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 17:09:04.106016   28158 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 17:09:04.106149   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:04.106190   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:04.119554   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0819 17:09:04.119969   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:04.120492   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:04.120511   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:04.120808   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:04.121001   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:04.121170   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:04.121311   28158 start.go:159] libmachine.API.Create for "ha-227346" (driver="kvm2")
	I0819 17:09:04.121338   28158 client.go:168] LocalClient.Create starting
	I0819 17:09:04.121368   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 17:09:04.121405   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:04.121434   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:04.121516   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 17:09:04.121542   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:04.121560   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:04.121596   28158 main.go:141] libmachine: Running pre-create checks...
	I0819 17:09:04.121614   28158 main.go:141] libmachine: (ha-227346) Calling .PreCreateCheck
	I0819 17:09:04.121929   28158 main.go:141] libmachine: (ha-227346) Calling .GetConfigRaw
	I0819 17:09:04.122249   28158 main.go:141] libmachine: Creating machine...
	I0819 17:09:04.122265   28158 main.go:141] libmachine: (ha-227346) Calling .Create
	I0819 17:09:04.122402   28158 main.go:141] libmachine: (ha-227346) Creating KVM machine...
	I0819 17:09:04.123482   28158 main.go:141] libmachine: (ha-227346) DBG | found existing default KVM network
	I0819 17:09:04.124096   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:04.123959   28181 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012b980}
	I0819 17:09:04.124120   28158 main.go:141] libmachine: (ha-227346) DBG | created network xml: 
	I0819 17:09:04.124132   28158 main.go:141] libmachine: (ha-227346) DBG | <network>
	I0819 17:09:04.124143   28158 main.go:141] libmachine: (ha-227346) DBG |   <name>mk-ha-227346</name>
	I0819 17:09:04.124151   28158 main.go:141] libmachine: (ha-227346) DBG |   <dns enable='no'/>
	I0819 17:09:04.124161   28158 main.go:141] libmachine: (ha-227346) DBG |   
	I0819 17:09:04.124171   28158 main.go:141] libmachine: (ha-227346) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 17:09:04.124180   28158 main.go:141] libmachine: (ha-227346) DBG |     <dhcp>
	I0819 17:09:04.124189   28158 main.go:141] libmachine: (ha-227346) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 17:09:04.124203   28158 main.go:141] libmachine: (ha-227346) DBG |     </dhcp>
	I0819 17:09:04.124215   28158 main.go:141] libmachine: (ha-227346) DBG |   </ip>
	I0819 17:09:04.124223   28158 main.go:141] libmachine: (ha-227346) DBG |   
	I0819 17:09:04.124231   28158 main.go:141] libmachine: (ha-227346) DBG | </network>
	I0819 17:09:04.124239   28158 main.go:141] libmachine: (ha-227346) DBG | 
	I0819 17:09:04.128999   28158 main.go:141] libmachine: (ha-227346) DBG | trying to create private KVM network mk-ha-227346 192.168.39.0/24...
	I0819 17:09:04.190799   28158 main.go:141] libmachine: (ha-227346) DBG | private KVM network mk-ha-227346 192.168.39.0/24 created
	I0819 17:09:04.190824   28158 main.go:141] libmachine: (ha-227346) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346 ...
	I0819 17:09:04.190834   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:04.190773   28181 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:04.190852   28158 main.go:141] libmachine: (ha-227346) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:09:04.190939   28158 main.go:141] libmachine: (ha-227346) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:09:04.471387   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:04.471287   28181 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa...
	I0819 17:09:05.097746   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:05.097640   28181 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/ha-227346.rawdisk...
	I0819 17:09:05.097791   28158 main.go:141] libmachine: (ha-227346) DBG | Writing magic tar header
	I0819 17:09:05.097802   28158 main.go:141] libmachine: (ha-227346) DBG | Writing SSH key tar header
	I0819 17:09:05.097810   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:05.097746   28181 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346 ...
	I0819 17:09:05.097830   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346
	I0819 17:09:05.097910   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346 (perms=drwx------)
	I0819 17:09:05.097940   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:09:05.097959   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 17:09:05.097970   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 17:09:05.097981   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 17:09:05.097991   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:09:05.098004   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:09:05.098017   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:05.098041   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 17:09:05.098059   28158 main.go:141] libmachine: (ha-227346) Creating domain...
	I0819 17:09:05.098071   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:09:05.098086   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:09:05.098096   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home
	I0819 17:09:05.098106   28158 main.go:141] libmachine: (ha-227346) DBG | Skipping /home - not owner
	I0819 17:09:05.099219   28158 main.go:141] libmachine: (ha-227346) define libvirt domain using xml: 
	I0819 17:09:05.099244   28158 main.go:141] libmachine: (ha-227346) <domain type='kvm'>
	I0819 17:09:05.099253   28158 main.go:141] libmachine: (ha-227346)   <name>ha-227346</name>
	I0819 17:09:05.099257   28158 main.go:141] libmachine: (ha-227346)   <memory unit='MiB'>2200</memory>
	I0819 17:09:05.099262   28158 main.go:141] libmachine: (ha-227346)   <vcpu>2</vcpu>
	I0819 17:09:05.099267   28158 main.go:141] libmachine: (ha-227346)   <features>
	I0819 17:09:05.099282   28158 main.go:141] libmachine: (ha-227346)     <acpi/>
	I0819 17:09:05.099311   28158 main.go:141] libmachine: (ha-227346)     <apic/>
	I0819 17:09:05.099322   28158 main.go:141] libmachine: (ha-227346)     <pae/>
	I0819 17:09:05.099334   28158 main.go:141] libmachine: (ha-227346)     
	I0819 17:09:05.099344   28158 main.go:141] libmachine: (ha-227346)   </features>
	I0819 17:09:05.099355   28158 main.go:141] libmachine: (ha-227346)   <cpu mode='host-passthrough'>
	I0819 17:09:05.099365   28158 main.go:141] libmachine: (ha-227346)   
	I0819 17:09:05.099370   28158 main.go:141] libmachine: (ha-227346)   </cpu>
	I0819 17:09:05.099377   28158 main.go:141] libmachine: (ha-227346)   <os>
	I0819 17:09:05.099381   28158 main.go:141] libmachine: (ha-227346)     <type>hvm</type>
	I0819 17:09:05.099387   28158 main.go:141] libmachine: (ha-227346)     <boot dev='cdrom'/>
	I0819 17:09:05.099395   28158 main.go:141] libmachine: (ha-227346)     <boot dev='hd'/>
	I0819 17:09:05.099405   28158 main.go:141] libmachine: (ha-227346)     <bootmenu enable='no'/>
	I0819 17:09:05.099415   28158 main.go:141] libmachine: (ha-227346)   </os>
	I0819 17:09:05.099428   28158 main.go:141] libmachine: (ha-227346)   <devices>
	I0819 17:09:05.099436   28158 main.go:141] libmachine: (ha-227346)     <disk type='file' device='cdrom'>
	I0819 17:09:05.099451   28158 main.go:141] libmachine: (ha-227346)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/boot2docker.iso'/>
	I0819 17:09:05.099462   28158 main.go:141] libmachine: (ha-227346)       <target dev='hdc' bus='scsi'/>
	I0819 17:09:05.099472   28158 main.go:141] libmachine: (ha-227346)       <readonly/>
	I0819 17:09:05.099476   28158 main.go:141] libmachine: (ha-227346)     </disk>
	I0819 17:09:05.099482   28158 main.go:141] libmachine: (ha-227346)     <disk type='file' device='disk'>
	I0819 17:09:05.099495   28158 main.go:141] libmachine: (ha-227346)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:09:05.099511   28158 main.go:141] libmachine: (ha-227346)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/ha-227346.rawdisk'/>
	I0819 17:09:05.099522   28158 main.go:141] libmachine: (ha-227346)       <target dev='hda' bus='virtio'/>
	I0819 17:09:05.099530   28158 main.go:141] libmachine: (ha-227346)     </disk>
	I0819 17:09:05.099541   28158 main.go:141] libmachine: (ha-227346)     <interface type='network'>
	I0819 17:09:05.099550   28158 main.go:141] libmachine: (ha-227346)       <source network='mk-ha-227346'/>
	I0819 17:09:05.099560   28158 main.go:141] libmachine: (ha-227346)       <model type='virtio'/>
	I0819 17:09:05.099584   28158 main.go:141] libmachine: (ha-227346)     </interface>
	I0819 17:09:05.099611   28158 main.go:141] libmachine: (ha-227346)     <interface type='network'>
	I0819 17:09:05.099624   28158 main.go:141] libmachine: (ha-227346)       <source network='default'/>
	I0819 17:09:05.099637   28158 main.go:141] libmachine: (ha-227346)       <model type='virtio'/>
	I0819 17:09:05.099648   28158 main.go:141] libmachine: (ha-227346)     </interface>
	I0819 17:09:05.099657   28158 main.go:141] libmachine: (ha-227346)     <serial type='pty'>
	I0819 17:09:05.099663   28158 main.go:141] libmachine: (ha-227346)       <target port='0'/>
	I0819 17:09:05.099671   28158 main.go:141] libmachine: (ha-227346)     </serial>
	I0819 17:09:05.099682   28158 main.go:141] libmachine: (ha-227346)     <console type='pty'>
	I0819 17:09:05.099699   28158 main.go:141] libmachine: (ha-227346)       <target type='serial' port='0'/>
	I0819 17:09:05.099714   28158 main.go:141] libmachine: (ha-227346)     </console>
	I0819 17:09:05.099726   28158 main.go:141] libmachine: (ha-227346)     <rng model='virtio'>
	I0819 17:09:05.099752   28158 main.go:141] libmachine: (ha-227346)       <backend model='random'>/dev/random</backend>
	I0819 17:09:05.099764   28158 main.go:141] libmachine: (ha-227346)     </rng>
	I0819 17:09:05.099786   28158 main.go:141] libmachine: (ha-227346)     
	I0819 17:09:05.099807   28158 main.go:141] libmachine: (ha-227346)     
	I0819 17:09:05.099821   28158 main.go:141] libmachine: (ha-227346)   </devices>
	I0819 17:09:05.099832   28158 main.go:141] libmachine: (ha-227346) </domain>
	I0819 17:09:05.099846   28158 main.go:141] libmachine: (ha-227346) 
	I0819 17:09:05.104727   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:75:31:56 in network default
	I0819 17:09:05.105291   28158 main.go:141] libmachine: (ha-227346) Ensuring networks are active...
	I0819 17:09:05.105306   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:05.106054   28158 main.go:141] libmachine: (ha-227346) Ensuring network default is active
	I0819 17:09:05.106404   28158 main.go:141] libmachine: (ha-227346) Ensuring network mk-ha-227346 is active
	I0819 17:09:05.106945   28158 main.go:141] libmachine: (ha-227346) Getting domain xml...
	I0819 17:09:05.107806   28158 main.go:141] libmachine: (ha-227346) Creating domain...
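The lines above show libmachine rendering a <domain> XML document and handing it to libvirt to define and then start the VM. As a rough, self-contained illustration of that define-and-start flow (not minikube's actual code; the XML file name is a placeholder for the document printed above), the libvirt Go bindings can be used like this:

// Sketch only: define a persistent KVM domain from XML and boot it.
// Assumes the domain XML has been written to ha-227346.xml beforehand.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-227346.xml") // hypothetical file holding the <domain> XML
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the domain from XML, then create (start) it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}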
	I0819 17:09:06.292856   28158 main.go:141] libmachine: (ha-227346) Waiting to get IP...
	I0819 17:09:06.293520   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:06.293882   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:06.293921   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:06.293861   28181 retry.go:31] will retry after 227.629159ms: waiting for machine to come up
	I0819 17:09:06.523593   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:06.524114   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:06.524150   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:06.524075   28181 retry.go:31] will retry after 292.133348ms: waiting for machine to come up
	I0819 17:09:06.817457   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:06.817907   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:06.817934   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:06.817873   28181 retry.go:31] will retry after 467.412101ms: waiting for machine to come up
	I0819 17:09:07.286543   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:07.287005   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:07.287030   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:07.286964   28181 retry.go:31] will retry after 421.9896ms: waiting for machine to come up
	I0819 17:09:07.710440   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:07.710830   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:07.710878   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:07.710805   28181 retry.go:31] will retry after 531.369228ms: waiting for machine to come up
	I0819 17:09:08.243409   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:08.243763   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:08.243792   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:08.243725   28181 retry.go:31] will retry after 699.187629ms: waiting for machine to come up
	I0819 17:09:08.944004   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:08.944382   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:08.944414   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:08.944337   28181 retry.go:31] will retry after 867.603094ms: waiting for machine to come up
	I0819 17:09:09.813897   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:09.814274   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:09.814302   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:09.814254   28181 retry.go:31] will retry after 1.027123124s: waiting for machine to come up
	I0819 17:09:10.843615   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:10.844095   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:10.844112   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:10.844055   28181 retry.go:31] will retry after 1.833742027s: waiting for machine to come up
	I0819 17:09:12.678985   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:12.679365   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:12.679393   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:12.679325   28181 retry.go:31] will retry after 1.648162625s: waiting for machine to come up
	I0819 17:09:14.329269   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:14.329767   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:14.329793   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:14.329733   28181 retry.go:31] will retry after 2.105332646s: waiting for machine to come up
	I0819 17:09:16.437905   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:16.438313   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:16.438338   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:16.438267   28181 retry.go:31] will retry after 3.409284945s: waiting for machine to come up
	I0819 17:09:19.849512   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:19.849804   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:19.849826   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:19.849765   28181 retry.go:31] will retry after 3.80335016s: waiting for machine to come up
	I0819 17:09:23.657777   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.658164   28158 main.go:141] libmachine: (ha-227346) Found IP for machine: 192.168.39.205
	I0819 17:09:23.658186   28158 main.go:141] libmachine: (ha-227346) Reserving static IP address...
	I0819 17:09:23.658199   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has current primary IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.658540   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find host DHCP lease matching {name: "ha-227346", mac: "52:54:00:ba:14:7f", ip: "192.168.39.205"} in network mk-ha-227346
	I0819 17:09:23.729579   28158 main.go:141] libmachine: (ha-227346) DBG | Getting to WaitForSSH function...
	I0819 17:09:23.729609   28158 main.go:141] libmachine: (ha-227346) Reserved static IP address: 192.168.39.205
	I0819 17:09:23.729651   28158 main.go:141] libmachine: (ha-227346) Waiting for SSH to be available...
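The "will retry after ..." lines above come from minikube's retry helper polling the libvirt network's DHCP leases until the new domain obtains an address, with the delay growing on each attempt. A minimal sketch of that kind of growing-delay poll loop, assuming a made-up lookupIP helper and illustrative delays (this is not the real retry.go):

// Illustrative only: poll with a growing, jittered delay until an IP appears.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP stands in for querying the DHCP leases of the libvirt network.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add some jitter and grow the delay, capped at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}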
	I0819 17:09:23.731831   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.732172   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:23.732200   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.732324   28158 main.go:141] libmachine: (ha-227346) DBG | Using SSH client type: external
	I0819 17:09:23.732353   28158 main.go:141] libmachine: (ha-227346) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa (-rw-------)
	I0819 17:09:23.732379   28158 main.go:141] libmachine: (ha-227346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:09:23.732406   28158 main.go:141] libmachine: (ha-227346) DBG | About to run SSH command:
	I0819 17:09:23.732420   28158 main.go:141] libmachine: (ha-227346) DBG | exit 0
	I0819 17:09:23.852556   28158 main.go:141] libmachine: (ha-227346) DBG | SSH cmd err, output: <nil>: 
	I0819 17:09:23.852900   28158 main.go:141] libmachine: (ha-227346) KVM machine creation complete!
	I0819 17:09:23.853275   28158 main.go:141] libmachine: (ha-227346) Calling .GetConfigRaw
	I0819 17:09:23.853865   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:23.854050   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:23.854227   28158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:09:23.854240   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:23.855460   28158 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:09:23.855476   28158 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:09:23.855484   28158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:09:23.855492   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:23.857441   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.857748   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:23.857778   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.857907   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:23.858060   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.858268   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.858413   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:23.858615   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:23.858823   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:23.858837   28158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:09:23.959980   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
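The probe above dials the machine with the generated id_rsa key and runs "exit 0" to confirm SSH is usable. A minimal sketch of the same probe using golang.org/x/crypto/ssh, with the host, user, and key path taken from the log (the code itself is illustrative, not minikube's SSH client):

// Sketch: connect with the machine's private key and run a trivial command.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.205:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The provisioner checks availability by running "exit 0".
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}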
	I0819 17:09:23.960000   28158 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:09:23.960008   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:23.962895   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.963242   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:23.963279   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.963526   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:23.963762   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.963985   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.964133   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:23.964334   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:23.964506   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:23.964517   28158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:09:24.064978   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:09:24.065054   28158 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:09:24.065064   28158 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:09:24.065071   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:24.065314   28158 buildroot.go:166] provisioning hostname "ha-227346"
	I0819 17:09:24.065344   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:24.065521   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.068050   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.068401   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.068434   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.068541   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.068712   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.068858   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.068991   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.069147   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:24.069424   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:24.069445   28158 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346 && echo "ha-227346" | sudo tee /etc/hostname
	I0819 17:09:24.186415   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346
	
	I0819 17:09:24.186461   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.189142   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.189471   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.189500   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.189705   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.189888   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.190038   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.190264   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.190470   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:24.190676   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:24.190692   28158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:09:24.300639   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:09:24.300668   28158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:09:24.300715   28158 buildroot.go:174] setting up certificates
	I0819 17:09:24.300727   28158 provision.go:84] configureAuth start
	I0819 17:09:24.300739   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:24.301042   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:24.303526   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.303973   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.304001   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.304100   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.306151   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.306474   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.306512   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.306634   28158 provision.go:143] copyHostCerts
	I0819 17:09:24.306667   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:09:24.306711   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:09:24.306721   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:09:24.306817   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:09:24.306937   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:09:24.306966   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:09:24.306977   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:09:24.307110   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:09:24.307251   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:09:24.307281   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:09:24.307290   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:09:24.307343   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:09:24.307426   28158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346 san=[127.0.0.1 192.168.39.205 ha-227346 localhost minikube]
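The step above issues a server certificate signed by the local CA with the SANs 127.0.0.1, 192.168.39.205, ha-227346, localhost and minikube. A sketch of issuing such a certificate with Go's crypto/x509, assuming an RSA CA key pair in ca.pem/ca-key.pem in PKCS#1 form (file names and key format are assumptions; this is not minikube's provision code):

// Sketch: sign a server certificate with the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caCertPEM, err := os.ReadFile("ca.pem") // assumed CA certificate path
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem") // assumed CA key path
	if err != nil {
		log.Fatal(err)
	}
	certBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if certBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM data")
	}
	caCert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// New server key plus a template carrying the SANs from the log.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-227346"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-227346", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.205")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Emit the signed server certificate in PEM form (server.pem equivalent).
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}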
	I0819 17:09:24.552566   28158 provision.go:177] copyRemoteCerts
	I0819 17:09:24.552628   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:09:24.552653   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.555270   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.555563   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.555587   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.555810   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.556008   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.556156   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.556273   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:24.638518   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:09:24.638585   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:09:24.660635   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:09:24.660695   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 17:09:24.681651   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:09:24.681720   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:09:24.702480   28158 provision.go:87] duration metric: took 401.737805ms to configureAuth
	I0819 17:09:24.702522   28158 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:09:24.702692   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:09:24.702774   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.705652   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.705986   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.706010   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.706188   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.706389   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.706517   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.706624   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.706739   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:24.706894   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:24.706909   28158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:09:24.957848   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:09:24.957879   28158 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:09:24.957891   28158 main.go:141] libmachine: (ha-227346) Calling .GetURL
	I0819 17:09:24.959356   28158 main.go:141] libmachine: (ha-227346) DBG | Using libvirt version 6000000
	I0819 17:09:24.962223   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.962592   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.962632   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.962771   28158 main.go:141] libmachine: Docker is up and running!
	I0819 17:09:24.962787   28158 main.go:141] libmachine: Reticulating splines...
	I0819 17:09:24.962795   28158 client.go:171] duration metric: took 20.841449041s to LocalClient.Create
	I0819 17:09:24.962820   28158 start.go:167] duration metric: took 20.84150978s to libmachine.API.Create "ha-227346"
	I0819 17:09:24.962830   28158 start.go:293] postStartSetup for "ha-227346" (driver="kvm2")
	I0819 17:09:24.962840   28158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:09:24.962856   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:24.963099   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:09:24.963127   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.965414   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.965734   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.965759   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.965899   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.966066   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.966221   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.966357   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:25.046570   28158 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:09:25.050669   28158 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:09:25.050691   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:09:25.050750   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:09:25.050817   28158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:09:25.050826   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:09:25.050910   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:09:25.060113   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:09:25.085218   28158 start.go:296] duration metric: took 122.376609ms for postStartSetup
	I0819 17:09:25.085264   28158 main.go:141] libmachine: (ha-227346) Calling .GetConfigRaw
	I0819 17:09:25.085816   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:25.088323   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.088814   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.088839   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.089092   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:25.089262   28158 start.go:128] duration metric: took 20.984914626s to createHost
	I0819 17:09:25.089283   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:25.091507   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.091809   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.091835   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.091982   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:25.092163   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.092315   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.092444   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:25.092595   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:25.092816   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:25.092829   28158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:09:25.197214   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087365.170604273
	
	I0819 17:09:25.197239   28158 fix.go:216] guest clock: 1724087365.170604273
	I0819 17:09:25.197251   28158 fix.go:229] Guest: 2024-08-19 17:09:25.170604273 +0000 UTC Remote: 2024-08-19 17:09:25.089273109 +0000 UTC m=+21.086006962 (delta=81.331164ms)
	I0819 17:09:25.197275   28158 fix.go:200] guest clock delta is within tolerance: 81.331164ms
	I0819 17:09:25.197281   28158 start.go:83] releasing machines lock for "ha-227346", held for 21.09302376s
	I0819 17:09:25.197302   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.197582   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:25.199941   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.200256   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.200280   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.200448   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.200927   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.201087   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.201180   28158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:09:25.201220   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:25.201266   28158 ssh_runner.go:195] Run: cat /version.json
	I0819 17:09:25.201287   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:25.203827   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.203865   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.204170   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.204196   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.204266   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.204293   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.204341   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:25.204512   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:25.204518   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.204677   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.204695   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:25.204777   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:25.204855   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:25.204894   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:25.311166   28158 ssh_runner.go:195] Run: systemctl --version
	I0819 17:09:25.316668   28158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:09:25.475482   28158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:09:25.480908   28158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:09:25.480978   28158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:09:25.495713   28158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 17:09:25.495735   28158 start.go:495] detecting cgroup driver to use...
	I0819 17:09:25.495796   28158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:09:25.510747   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:09:25.526102   28158 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:09:25.526171   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:09:25.539078   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:09:25.552018   28158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:09:25.657812   28158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:09:25.817472   28158 docker.go:233] disabling docker service ...
	I0819 17:09:25.817548   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:09:25.831346   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:09:25.843914   28158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:09:25.976957   28158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:09:26.104356   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:09:26.117589   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:09:26.135726   28158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:09:26.135792   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.145784   28158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:09:26.145853   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.155900   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.165633   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.175589   28158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:09:26.185415   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.194740   28158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.210069   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
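The sed one-liners above point cri-o at the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf. Purely as an illustration of what those two edits do (run locally here, whereas minikube runs them over SSH), the same rewrites in Go:

// Sketch: rewrite the pause_image and cgroup_manager lines in place.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("crio config updated")
}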
	I0819 17:09:26.219701   28158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:09:26.228426   28158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:09:26.228485   28158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:09:26.240858   28158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:09:26.249419   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:09:26.365160   28158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:09:26.488729   28158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:09:26.488808   28158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:09:26.493106   28158 start.go:563] Will wait 60s for crictl version
	I0819 17:09:26.493164   28158 ssh_runner.go:195] Run: which crictl
	I0819 17:09:26.496562   28158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:09:26.535776   28158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
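"Will wait 60s for socket path /var/run/crio/crio.sock" above is a poll until the restarted cri-o exposes its CRI socket. A rough sketch of that wait, checking both that the socket file exists and that something accepts a connection on it (illustrative only, not minikube's start.go):

// Sketch: wait until a unix socket exists and is connectable.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			// The file exists; make sure something is actually listening.
			if conn, err := net.DialTimeout("unix", path, time.Second); err == nil {
				conn.Close()
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}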
	I0819 17:09:26.535866   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:09:26.561181   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:09:26.588190   28158 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:09:26.589563   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:26.592126   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:26.592484   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:26.592512   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:26.592732   28158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:09:26.596466   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:09:26.608306   28158 kubeadm.go:883] updating cluster {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:09:26.608412   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:09:26.608482   28158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:09:26.638050   28158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 17:09:26.638114   28158 ssh_runner.go:195] Run: which lz4
	I0819 17:09:26.641598   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 17:09:26.641681   28158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 17:09:26.645389   28158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 17:09:26.645417   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 17:09:27.760700   28158 crio.go:462] duration metric: took 1.119047314s to copy over tarball
	I0819 17:09:27.760788   28158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 17:09:29.722576   28158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.96176146s)
	I0819 17:09:29.722601   28158 crio.go:469] duration metric: took 1.961880124s to extract the tarball
	I0819 17:09:29.722609   28158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 17:09:29.758382   28158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:09:29.802261   28158 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:09:29.802284   28158 cache_images.go:84] Images are preloaded, skipping loading
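The preload step above copies preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 to the guest as /preloaded.tar.lz4 and unpacks it under /var with xattrs preserved, so crictl immediately sees the images. For illustration, the extraction command can be wrapped with os/exec (minikube actually runs it over SSH on the guest):

// Sketch: run the same tar extraction the log shows, locally.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preloaded images extracted")
}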
	I0819 17:09:29.802293   28158 kubeadm.go:934] updating node { 192.168.39.205 8443 v1.31.0 crio true true} ...
	I0819 17:09:29.802409   28158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:09:29.802484   28158 ssh_runner.go:195] Run: crio config
	I0819 17:09:29.844677   28158 cni.go:84] Creating CNI manager for ""
	I0819 17:09:29.844694   28158 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 17:09:29.844709   28158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:09:29.844731   28158 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-227346 NodeName:ha-227346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:09:29.844894   28158 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-227346"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
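
Note (illustrative, not part of the minikube log): the kubeadm config rendered above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). A minimal Go sketch, assuming gopkg.in/yaml.v3 is available, shows how such a stream can be walked document by document; each Decode call consumes one document and io.EOF marks the end.

// Sketch only: iterate over the documents of a kubeadm-style YAML stream
// and print each document's kind. The document text is a trimmed stand-in
// for the full config shown in the log above.
package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}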
	
	I0819 17:09:29.844917   28158 kube-vip.go:115] generating kube-vip config ...
	I0819 17:09:29.844965   28158 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:09:29.861764   28158 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:09:29.861866   28158 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
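
For context, and purely as a sketch (not code from the test run): once kube-vip holds the VIP declared in the manifest above, the HA control-plane endpoint can be probed over HTTPS. The address and port (192.168.39.254:8443) are taken from the log; certificate verification is skipped here because no client CA is loaded, and any HTTP response, even 401/403 from an unauthenticated request, indicates the VIP is answering.

// Sketch only: probe the kube-vip-managed control-plane VIP.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification for this illustrative probe; a real check
			// would load the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP responded:", resp.Status)
}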
	I0819 17:09:29.861916   28158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:09:29.870992   28158 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:09:29.871059   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 17:09:29.879608   28158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 17:09:29.894571   28158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:09:29.909206   28158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 17:09:29.924181   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 17:09:29.938999   28158 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:09:29.942670   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:09:29.953367   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:09:30.069400   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:09:30.085973   28158 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.205
	I0819 17:09:30.085999   28158 certs.go:194] generating shared ca certs ...
	I0819 17:09:30.086015   28158 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.086198   28158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:09:30.086254   28158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:09:30.086268   28158 certs.go:256] generating profile certs ...
	I0819 17:09:30.086342   28158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:09:30.086359   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt with IP's: []
	I0819 17:09:30.173064   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt ...
	I0819 17:09:30.173092   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt: {Name:mk591f421539a106f08e5c1d174e11dc33c0a5bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.173272   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key ...
	I0819 17:09:30.173285   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key: {Name:mkd462373711801288a4ce7966c2b6d712194477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.173388   28158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b
	I0819 17:09:30.173404   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.254]
	I0819 17:09:30.233812   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b ...
	I0819 17:09:30.233839   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b: {Name:mkb651d7d4607b62d21d16ba15b130759f43fa27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.233994   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b ...
	I0819 17:09:30.234006   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b: {Name:mk93f62ffdd65b89624f041e2ccf7fba11f0a010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.234095   28158 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:09:30.234174   28158 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
	I0819 17:09:30.234227   28158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:09:30.234242   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt with IP's: []
	I0819 17:09:30.300093   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt ...
	I0819 17:09:30.300121   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt: {Name:mk537dacc775b012dc5337f6a018fbc6b28b2cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.300264   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key ...
	I0819 17:09:30.300281   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key: {Name:mk21b5ccc1585a537d1750c1265bac520761ee51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
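
The certs.go/crypto.go lines above generate the profile's client, apiserver, and aggregator certificates against the shared minikube CA. A compressed sketch of the general pattern follows, using only the Go standard library, with the IP SANs copied from the apiserver cert line in the log; the real minikube code differs, and the CA here is a throwaway stand-in.

// Sketch only: sign a leaf certificate with IP SANs against a local CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the shared minikube CA (illustrative only).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Leaf certificate with the IP SANs shown for the apiserver cert in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.205"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
}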
	I0819 17:09:30.300347   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:09:30.300364   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:09:30.300379   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:09:30.300392   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:09:30.300402   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:09:30.300414   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:09:30.300423   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:09:30.300435   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:09:30.300480   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:09:30.300511   28158 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:09:30.300520   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:09:30.300540   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:09:30.300565   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:09:30.300588   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:09:30.300636   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:09:30.300661   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.300673   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.300686   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.301212   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:09:30.325076   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:09:30.347257   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:09:30.368721   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:09:30.389960   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 17:09:30.411449   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:09:30.433372   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:09:30.455374   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:09:30.476292   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:09:30.497100   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:09:30.517682   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:09:30.538854   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:09:30.554208   28158 ssh_runner.go:195] Run: openssl version
	I0819 17:09:30.559810   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:09:30.569973   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.573945   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.574001   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.579263   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:09:30.589001   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:09:30.598822   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.602904   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.602967   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.608088   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:09:30.617950   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:09:30.628103   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.632174   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.632224   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.637537   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:09:30.647799   28158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:09:30.651723   28158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:09:30.651778   28158 kubeadm.go:392] StartCluster: {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:09:30.651861   28158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:09:30.651913   28158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:09:30.687533   28158 cri.go:89] found id: ""
	I0819 17:09:30.687611   28158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:09:30.697284   28158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:09:30.706179   28158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:09:30.714803   28158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:09:30.714820   28158 kubeadm.go:157] found existing configuration files:
	
	I0819 17:09:30.714863   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:09:30.723039   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:09:30.723084   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:09:30.731744   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:09:30.740029   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:09:30.740091   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:09:30.748716   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:09:30.756943   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:09:30.756995   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:09:30.765443   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:09:30.773460   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:09:30.773504   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:09:30.781847   28158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 17:09:30.872907   28158 kubeadm.go:310] W0819 17:09:30.853430     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:09:30.873644   28158 kubeadm.go:310] W0819 17:09:30.854326     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:09:31.001162   28158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:09:41.405511   28158 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:09:41.405574   28158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:09:41.405680   28158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:09:41.405898   28158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:09:41.405990   28158 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:09:41.406059   28158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:09:41.407930   28158 out.go:235]   - Generating certificates and keys ...
	I0819 17:09:41.407996   28158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:09:41.408111   28158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:09:41.408229   28158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:09:41.408308   28158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:09:41.408400   28158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:09:41.408477   28158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:09:41.408541   28158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:09:41.408646   28158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-227346 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0819 17:09:41.408693   28158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:09:41.408807   28158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-227346 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0819 17:09:41.408862   28158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:09:41.408942   28158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:09:41.409012   28158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:09:41.409100   28158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:09:41.409171   28158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:09:41.409249   28158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:09:41.409329   28158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:09:41.409415   28158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:09:41.409486   28158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:09:41.409602   28158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:09:41.409677   28158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:09:41.412032   28158 out.go:235]   - Booting up control plane ...
	I0819 17:09:41.412113   28158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:09:41.412175   28158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:09:41.412229   28158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:09:41.412317   28158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:09:41.412396   28158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:09:41.412430   28158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:09:41.412561   28158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:09:41.412670   28158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:09:41.412720   28158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001230439s
	I0819 17:09:41.412841   28158 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:09:41.412940   28158 kubeadm.go:310] [api-check] The API server is healthy after 5.675453497s
	I0819 17:09:41.413064   28158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:09:41.413202   28158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:09:41.413254   28158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:09:41.413406   28158 kubeadm.go:310] [mark-control-plane] Marking the node ha-227346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:09:41.413452   28158 kubeadm.go:310] [bootstrap-token] Using token: bnwy1v.t48ncxxc2fkxdt25
	I0819 17:09:41.414871   28158 out.go:235]   - Configuring RBAC rules ...
	I0819 17:09:41.414952   28158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:09:41.415053   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:09:41.415232   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:09:41.415361   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:09:41.415460   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:09:41.415555   28158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:09:41.415688   28158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:09:41.415754   28158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:09:41.415827   28158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:09:41.415836   28158 kubeadm.go:310] 
	I0819 17:09:41.415916   28158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:09:41.415924   28158 kubeadm.go:310] 
	I0819 17:09:41.416025   28158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:09:41.416034   28158 kubeadm.go:310] 
	I0819 17:09:41.416068   28158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:09:41.416147   28158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:09:41.416210   28158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:09:41.416219   28158 kubeadm.go:310] 
	I0819 17:09:41.416262   28158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:09:41.416268   28158 kubeadm.go:310] 
	I0819 17:09:41.416310   28158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:09:41.416314   28158 kubeadm.go:310] 
	I0819 17:09:41.416359   28158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:09:41.416429   28158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:09:41.416486   28158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:09:41.416495   28158 kubeadm.go:310] 
	I0819 17:09:41.416567   28158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:09:41.416639   28158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:09:41.416645   28158 kubeadm.go:310] 
	I0819 17:09:41.416772   28158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bnwy1v.t48ncxxc2fkxdt25 \
	I0819 17:09:41.416861   28158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 17:09:41.416882   28158 kubeadm.go:310] 	--control-plane 
	I0819 17:09:41.416887   28158 kubeadm.go:310] 
	I0819 17:09:41.416986   28158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:09:41.417003   28158 kubeadm.go:310] 
	I0819 17:09:41.417120   28158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bnwy1v.t48ncxxc2fkxdt25 \
	I0819 17:09:41.417278   28158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
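
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch that recomputes it from the CA certificate (the path used is the one the log copies ca.crt to on the node):

// Sketch only: recompute kubeadm's discovery-token-ca-cert-hash from a CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log above
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, matching kubeadm's pin format.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}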
	I0819 17:09:41.417290   28158 cni.go:84] Creating CNI manager for ""
	I0819 17:09:41.417296   28158 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 17:09:41.418782   28158 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 17:09:41.419956   28158 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 17:09:41.426412   28158 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 17:09:41.426433   28158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 17:09:41.447079   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 17:09:41.821869   28158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:09:41.821949   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:41.821963   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-227346 minikube.k8s.io/updated_at=2024_08_19T17_09_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-227346 minikube.k8s.io/primary=true
	I0819 17:09:41.865646   28158 ops.go:34] apiserver oom_adj: -16
	I0819 17:09:41.986516   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:42.487361   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:42.987265   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:43.487112   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:43.987266   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:44.486579   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:44.987306   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:45.091004   28158 kubeadm.go:1113] duration metric: took 3.269118599s to wait for elevateKubeSystemPrivileges
	I0819 17:09:45.091040   28158 kubeadm.go:394] duration metric: took 14.439266352s to StartCluster
	I0819 17:09:45.091058   28158 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:45.091133   28158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:09:45.091898   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:45.092107   28158 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:09:45.092125   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:09:45.092141   28158 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 17:09:45.092181   28158 addons.go:69] Setting storage-provisioner=true in profile "ha-227346"
	I0819 17:09:45.092206   28158 addons.go:234] Setting addon storage-provisioner=true in "ha-227346"
	I0819 17:09:45.092131   28158 start.go:241] waiting for startup goroutines ...
	I0819 17:09:45.092228   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:09:45.092228   28158 addons.go:69] Setting default-storageclass=true in profile "ha-227346"
	I0819 17:09:45.092334   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:09:45.092369   28158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-227346"
	I0819 17:09:45.092693   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.092724   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.092852   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.092885   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.107380   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I0819 17:09:45.107694   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0819 17:09:45.107911   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.108031   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.108413   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.108431   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.108559   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.108583   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.108766   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.108898   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.109077   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:45.109260   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.109301   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.111477   28158 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:09:45.111842   28158 kapi.go:59] client config for ha-227346: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt", KeyFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key", CAFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 17:09:45.112339   28158 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 17:09:45.112653   28158 addons.go:234] Setting addon default-storageclass=true in "ha-227346"
	I0819 17:09:45.112694   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:09:45.113101   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.113144   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.124830   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0819 17:09:45.125358   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.125968   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.125995   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.126325   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.126508   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:45.127326   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0819 17:09:45.127754   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.128255   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.128278   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.128289   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:45.128611   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.129042   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.129071   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.130146   28158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:09:45.131435   28158 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:09:45.131458   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:09:45.131477   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:45.134424   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.134892   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:45.134931   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.135207   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:45.135404   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:45.135594   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:45.135764   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:45.144965   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0819 17:09:45.145478   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.145983   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.146005   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.146321   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.146515   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:45.148096   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:45.148319   28158 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:09:45.148334   28158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:09:45.148351   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:45.151255   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.151672   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:45.151699   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.151846   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:45.151989   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:45.152110   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:45.152237   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:45.259248   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:09:45.270732   28158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:09:45.349207   28158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:09:45.826885   28158 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 17:09:46.136930   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.136956   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137002   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.137024   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137230   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.137270   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137283   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.137291   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.137290   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.137299   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137381   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137396   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.137409   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.137420   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137498   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137511   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.137561   28158 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 17:09:46.137580   28158 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 17:09:46.137669   28158 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 17:09:46.137680   28158 round_trippers.go:469] Request Headers:
	I0819 17:09:46.137690   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:09:46.137701   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:09:46.137769   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.137936   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137961   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.153124   28158 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0819 17:09:46.153701   28158 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 17:09:46.153715   28158 round_trippers.go:469] Request Headers:
	I0819 17:09:46.153722   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:09:46.153726   28158 round_trippers.go:473]     Content-Type: application/json
	I0819 17:09:46.153730   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:09:46.158567   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:09:46.158753   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.158768   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.159003   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.159021   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.159031   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.160867   28158 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 17:09:46.162108   28158 addons.go:510] duration metric: took 1.069969082s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 17:09:46.162137   28158 start.go:246] waiting for cluster config update ...
	I0819 17:09:46.162150   28158 start.go:255] writing updated cluster config ...
	I0819 17:09:46.163441   28158 out.go:201] 
	I0819 17:09:46.164979   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:09:46.165041   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:46.166673   28158 out.go:177] * Starting "ha-227346-m02" control-plane node in "ha-227346" cluster
	I0819 17:09:46.168152   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:09:46.168176   28158 cache.go:56] Caching tarball of preloaded images
	I0819 17:09:46.168257   28158 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:09:46.168268   28158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:09:46.168330   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:46.168520   28158 start.go:360] acquireMachinesLock for ha-227346-m02: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:09:46.168567   28158 start.go:364] duration metric: took 28.205µs to acquireMachinesLock for "ha-227346-m02"
	I0819 17:09:46.168593   28158 start.go:93] Provisioning new machine with config: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:09:46.168680   28158 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 17:09:46.170178   28158 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 17:09:46.170260   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:46.170288   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:46.184726   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0819 17:09:46.185219   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:46.185642   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:46.185661   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:46.186021   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:46.186241   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:09:46.186444   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:09:46.186620   28158 start.go:159] libmachine.API.Create for "ha-227346" (driver="kvm2")
	I0819 17:09:46.186643   28158 client.go:168] LocalClient.Create starting
	I0819 17:09:46.186676   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 17:09:46.186715   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:46.186732   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:46.186807   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 17:09:46.186838   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:46.186854   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:46.186882   28158 main.go:141] libmachine: Running pre-create checks...
	I0819 17:09:46.186893   28158 main.go:141] libmachine: (ha-227346-m02) Calling .PreCreateCheck
	I0819 17:09:46.187097   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetConfigRaw
	I0819 17:09:46.187488   28158 main.go:141] libmachine: Creating machine...
	I0819 17:09:46.187501   28158 main.go:141] libmachine: (ha-227346-m02) Calling .Create
	I0819 17:09:46.187656   28158 main.go:141] libmachine: (ha-227346-m02) Creating KVM machine...
	I0819 17:09:46.189110   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found existing default KVM network
	I0819 17:09:46.189234   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found existing private KVM network mk-ha-227346
	I0819 17:09:46.189390   28158 main.go:141] libmachine: (ha-227346-m02) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02 ...
	I0819 17:09:46.189435   28158 main.go:141] libmachine: (ha-227346-m02) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:09:46.189452   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.189357   28513 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:46.189540   28158 main.go:141] libmachine: (ha-227346-m02) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:09:46.423799   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.423674   28513 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa...
	I0819 17:09:46.514853   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.514745   28513 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/ha-227346-m02.rawdisk...
	I0819 17:09:46.514876   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Writing magic tar header
	I0819 17:09:46.514886   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Writing SSH key tar header
	I0819 17:09:46.514894   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.514850   28513 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02 ...
	I0819 17:09:46.514980   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02
	I0819 17:09:46.514997   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 17:09:46.515005   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02 (perms=drwx------)
	I0819 17:09:46.515012   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:46.515024   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 17:09:46.515031   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:09:46.515043   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 17:09:46.515049   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 17:09:46.515059   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:09:46.515066   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:09:46.515074   28158 main.go:141] libmachine: (ha-227346-m02) Creating domain...
	I0819 17:09:46.515099   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:09:46.515123   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:09:46.515139   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home
	I0819 17:09:46.515150   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Skipping /home - not owner
	I0819 17:09:46.516140   28158 main.go:141] libmachine: (ha-227346-m02) define libvirt domain using xml: 
	I0819 17:09:46.516155   28158 main.go:141] libmachine: (ha-227346-m02) <domain type='kvm'>
	I0819 17:09:46.516161   28158 main.go:141] libmachine: (ha-227346-m02)   <name>ha-227346-m02</name>
	I0819 17:09:46.516166   28158 main.go:141] libmachine: (ha-227346-m02)   <memory unit='MiB'>2200</memory>
	I0819 17:09:46.516189   28158 main.go:141] libmachine: (ha-227346-m02)   <vcpu>2</vcpu>
	I0819 17:09:46.516206   28158 main.go:141] libmachine: (ha-227346-m02)   <features>
	I0819 17:09:46.516212   28158 main.go:141] libmachine: (ha-227346-m02)     <acpi/>
	I0819 17:09:46.516217   28158 main.go:141] libmachine: (ha-227346-m02)     <apic/>
	I0819 17:09:46.516222   28158 main.go:141] libmachine: (ha-227346-m02)     <pae/>
	I0819 17:09:46.516229   28158 main.go:141] libmachine: (ha-227346-m02)     
	I0819 17:09:46.516234   28158 main.go:141] libmachine: (ha-227346-m02)   </features>
	I0819 17:09:46.516242   28158 main.go:141] libmachine: (ha-227346-m02)   <cpu mode='host-passthrough'>
	I0819 17:09:46.516247   28158 main.go:141] libmachine: (ha-227346-m02)   
	I0819 17:09:46.516252   28158 main.go:141] libmachine: (ha-227346-m02)   </cpu>
	I0819 17:09:46.516257   28158 main.go:141] libmachine: (ha-227346-m02)   <os>
	I0819 17:09:46.516264   28158 main.go:141] libmachine: (ha-227346-m02)     <type>hvm</type>
	I0819 17:09:46.516269   28158 main.go:141] libmachine: (ha-227346-m02)     <boot dev='cdrom'/>
	I0819 17:09:46.516274   28158 main.go:141] libmachine: (ha-227346-m02)     <boot dev='hd'/>
	I0819 17:09:46.516280   28158 main.go:141] libmachine: (ha-227346-m02)     <bootmenu enable='no'/>
	I0819 17:09:46.516290   28158 main.go:141] libmachine: (ha-227346-m02)   </os>
	I0819 17:09:46.516296   28158 main.go:141] libmachine: (ha-227346-m02)   <devices>
	I0819 17:09:46.516306   28158 main.go:141] libmachine: (ha-227346-m02)     <disk type='file' device='cdrom'>
	I0819 17:09:46.516338   28158 main.go:141] libmachine: (ha-227346-m02)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/boot2docker.iso'/>
	I0819 17:09:46.516361   28158 main.go:141] libmachine: (ha-227346-m02)       <target dev='hdc' bus='scsi'/>
	I0819 17:09:46.516374   28158 main.go:141] libmachine: (ha-227346-m02)       <readonly/>
	I0819 17:09:46.516383   28158 main.go:141] libmachine: (ha-227346-m02)     </disk>
	I0819 17:09:46.516398   28158 main.go:141] libmachine: (ha-227346-m02)     <disk type='file' device='disk'>
	I0819 17:09:46.516412   28158 main.go:141] libmachine: (ha-227346-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:09:46.516428   28158 main.go:141] libmachine: (ha-227346-m02)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/ha-227346-m02.rawdisk'/>
	I0819 17:09:46.516440   28158 main.go:141] libmachine: (ha-227346-m02)       <target dev='hda' bus='virtio'/>
	I0819 17:09:46.516449   28158 main.go:141] libmachine: (ha-227346-m02)     </disk>
	I0819 17:09:46.516463   28158 main.go:141] libmachine: (ha-227346-m02)     <interface type='network'>
	I0819 17:09:46.516475   28158 main.go:141] libmachine: (ha-227346-m02)       <source network='mk-ha-227346'/>
	I0819 17:09:46.516488   28158 main.go:141] libmachine: (ha-227346-m02)       <model type='virtio'/>
	I0819 17:09:46.516498   28158 main.go:141] libmachine: (ha-227346-m02)     </interface>
	I0819 17:09:46.516509   28158 main.go:141] libmachine: (ha-227346-m02)     <interface type='network'>
	I0819 17:09:46.516525   28158 main.go:141] libmachine: (ha-227346-m02)       <source network='default'/>
	I0819 17:09:46.516537   28158 main.go:141] libmachine: (ha-227346-m02)       <model type='virtio'/>
	I0819 17:09:46.516548   28158 main.go:141] libmachine: (ha-227346-m02)     </interface>
	I0819 17:09:46.516558   28158 main.go:141] libmachine: (ha-227346-m02)     <serial type='pty'>
	I0819 17:09:46.516569   28158 main.go:141] libmachine: (ha-227346-m02)       <target port='0'/>
	I0819 17:09:46.516581   28158 main.go:141] libmachine: (ha-227346-m02)     </serial>
	I0819 17:09:46.516591   28158 main.go:141] libmachine: (ha-227346-m02)     <console type='pty'>
	I0819 17:09:46.516601   28158 main.go:141] libmachine: (ha-227346-m02)       <target type='serial' port='0'/>
	I0819 17:09:46.516617   28158 main.go:141] libmachine: (ha-227346-m02)     </console>
	I0819 17:09:46.516639   28158 main.go:141] libmachine: (ha-227346-m02)     <rng model='virtio'>
	I0819 17:09:46.516651   28158 main.go:141] libmachine: (ha-227346-m02)       <backend model='random'>/dev/random</backend>
	I0819 17:09:46.516663   28158 main.go:141] libmachine: (ha-227346-m02)     </rng>
	I0819 17:09:46.516673   28158 main.go:141] libmachine: (ha-227346-m02)     
	I0819 17:09:46.516683   28158 main.go:141] libmachine: (ha-227346-m02)     
	I0819 17:09:46.516691   28158 main.go:141] libmachine: (ha-227346-m02)   </devices>
	I0819 17:09:46.516714   28158 main.go:141] libmachine: (ha-227346-m02) </domain>
	I0819 17:09:46.516736   28158 main.go:141] libmachine: (ha-227346-m02) 
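Editor's note: the XML logged above is the libvirt domain the kvm2 driver defines for the m02 VM (2 vCPUs, 2200 MiB RAM, the boot2docker ISO as a CD-ROM, a raw virtio disk, and two virtio NICs on the mk-ha-227346 and default networks). As a rough standalone sketch of the same step, the Go snippet below writes such an XML to a temp file and registers and boots it with the virsh CLI. This is illustrative only and is not the driver's actual code path, which talks to libvirt through its Go bindings.

// domain_define.go: minimal sketch of defining and starting a libvirt domain
// from an XML description using the virsh CLI. Paths and names are placeholders.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// defineAndStart writes the domain XML to a temp file, registers it with
// "virsh define", then boots it with "virsh start".
func defineAndStart(xml, name string) error {
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(xml); err != nil {
		return err
	}
	f.Close()

	for _, args := range [][]string{{"define", f.Name()}, {"start", name}} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Usage: go run . <domain.xml> <domain-name>
	if len(os.Args) != 3 {
		log.Fatal("usage: define <domain.xml> <name>")
	}
	xml, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	if err := defineAndStart(string(xml), os.Args[2]); err != nil {
		log.Fatal(err)
	}
}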
	I0819 17:09:46.523013   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:a9:0c:a1 in network default
	I0819 17:09:46.523580   28158 main.go:141] libmachine: (ha-227346-m02) Ensuring networks are active...
	I0819 17:09:46.523618   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:46.524219   28158 main.go:141] libmachine: (ha-227346-m02) Ensuring network default is active
	I0819 17:09:46.524528   28158 main.go:141] libmachine: (ha-227346-m02) Ensuring network mk-ha-227346 is active
	I0819 17:09:46.524908   28158 main.go:141] libmachine: (ha-227346-m02) Getting domain xml...
	I0819 17:09:46.525627   28158 main.go:141] libmachine: (ha-227346-m02) Creating domain...
	I0819 17:09:47.735681   28158 main.go:141] libmachine: (ha-227346-m02) Waiting to get IP...
	I0819 17:09:47.736569   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:47.736998   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:47.737018   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:47.736985   28513 retry.go:31] will retry after 188.449394ms: waiting for machine to come up
	I0819 17:09:47.927306   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:47.927798   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:47.927825   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:47.927762   28513 retry.go:31] will retry after 311.299545ms: waiting for machine to come up
	I0819 17:09:48.240293   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:48.240731   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:48.240770   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:48.240687   28513 retry.go:31] will retry after 426.822946ms: waiting for machine to come up
	I0819 17:09:48.669457   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:48.669960   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:48.669991   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:48.669909   28513 retry.go:31] will retry after 460.253566ms: waiting for machine to come up
	I0819 17:09:49.131460   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:49.131973   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:49.132013   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:49.131903   28513 retry.go:31] will retry after 659.325431ms: waiting for machine to come up
	I0819 17:09:49.792742   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:49.793238   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:49.793266   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:49.793188   28513 retry.go:31] will retry after 842.316805ms: waiting for machine to come up
	I0819 17:09:50.637184   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:50.637555   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:50.637581   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:50.637523   28513 retry.go:31] will retry after 891.20218ms: waiting for machine to come up
	I0819 17:09:51.529869   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:51.530353   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:51.530376   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:51.530303   28513 retry.go:31] will retry after 968.497872ms: waiting for machine to come up
	I0819 17:09:52.500332   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:52.500737   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:52.500781   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:52.500683   28513 retry.go:31] will retry after 1.361966722s: waiting for machine to come up
	I0819 17:09:53.864084   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:53.864538   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:53.864574   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:53.864484   28513 retry.go:31] will retry after 1.418071931s: waiting for machine to come up
	I0819 17:09:55.285394   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:55.285847   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:55.285868   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:55.285818   28513 retry.go:31] will retry after 2.811587726s: waiting for machine to come up
	I0819 17:09:58.099399   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:58.099879   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:58.099905   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:58.099837   28513 retry.go:31] will retry after 2.867282911s: waiting for machine to come up
	I0819 17:10:00.970848   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:00.971258   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:10:00.971280   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:10:00.971220   28513 retry.go:31] will retry after 3.969298378s: waiting for machine to come up
	I0819 17:10:04.942401   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:04.942777   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:10:04.942802   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:10:04.942743   28513 retry.go:31] will retry after 5.544139087s: waiting for machine to come up
	I0819 17:10:10.491913   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:10.492372   28158 main.go:141] libmachine: (ha-227346-m02) Found IP for machine: 192.168.39.189
	I0819 17:10:10.492391   28158 main.go:141] libmachine: (ha-227346-m02) Reserving static IP address...
	I0819 17:10:10.492401   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has current primary IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:10.492766   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find host DHCP lease matching {name: "ha-227346-m02", mac: "52:54:00:50:ca:df", ip: "192.168.39.189"} in network mk-ha-227346
	I0819 17:10:10.568180   28158 main.go:141] libmachine: (ha-227346-m02) Reserved static IP address: 192.168.39.189
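Editor's note: the "will retry after ..." lines above come from a polling loop that waits for the new VM to obtain a DHCP lease, sleeping progressively longer between checks. Below is a minimal sketch of that backoff pattern; the interval growth, jitter, and cap are illustrative values, not minikube's actual retry.go parameters.

// backoff_sketch.go: retry a condition with growing, jittered delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxWait elapses,
// sleeping a little longer (with jitter) between attempts.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %s: %w", maxWait, err)
		}
		// Jitter keeps concurrent waiters from polling in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err, "attempts:", attempts)
}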
	I0819 17:10:10.568205   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Getting to WaitForSSH function...
	I0819 17:10:10.568212   28158 main.go:141] libmachine: (ha-227346-m02) Waiting for SSH to be available...
	I0819 17:10:10.570889   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:10.571157   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346
	I0819 17:10:10.571179   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find defined IP address of network mk-ha-227346 interface with MAC address 52:54:00:50:ca:df
	I0819 17:10:10.571304   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH client type: external
	I0819 17:10:10.571328   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa (-rw-------)
	I0819 17:10:10.571400   28158 main.go:141] libmachine: (ha-227346-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:10:10.571433   28158 main.go:141] libmachine: (ha-227346-m02) DBG | About to run SSH command:
	I0819 17:10:10.571453   28158 main.go:141] libmachine: (ha-227346-m02) DBG | exit 0
	I0819 17:10:10.575374   28158 main.go:141] libmachine: (ha-227346-m02) DBG | SSH cmd err, output: exit status 255: 
	I0819 17:10:10.575401   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 17:10:10.575413   28158 main.go:141] libmachine: (ha-227346-m02) DBG | command : exit 0
	I0819 17:10:10.575421   28158 main.go:141] libmachine: (ha-227346-m02) DBG | err     : exit status 255
	I0819 17:10:10.575432   28158 main.go:141] libmachine: (ha-227346-m02) DBG | output  : 
	I0819 17:10:13.577470   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Getting to WaitForSSH function...
	I0819 17:10:13.579842   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.580251   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.580279   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.580397   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH client type: external
	I0819 17:10:13.580420   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa (-rw-------)
	I0819 17:10:13.580439   28158 main.go:141] libmachine: (ha-227346-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:10:13.580448   28158 main.go:141] libmachine: (ha-227346-m02) DBG | About to run SSH command:
	I0819 17:10:13.580456   28158 main.go:141] libmachine: (ha-227346-m02) DBG | exit 0
	I0819 17:10:13.704776   28158 main.go:141] libmachine: (ha-227346-m02) DBG | SSH cmd err, output: <nil>: 
	I0819 17:10:13.705100   28158 main.go:141] libmachine: (ha-227346-m02) KVM machine creation complete!
	I0819 17:10:13.705424   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetConfigRaw
	I0819 17:10:13.705980   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:13.706159   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:13.706314   28158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:10:13.706330   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:10:13.707571   28158 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:10:13.707586   28158 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:10:13.707594   28158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:10:13.707602   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:13.709918   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.710239   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.710267   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.710395   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:13.710554   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.710702   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.710857   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:13.711028   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:13.711223   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:13.711235   28158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:10:13.815951   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:10:13.815974   28158 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:10:13.815981   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:13.818763   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.819095   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.819138   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.819245   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:13.819478   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.819628   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.819756   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:13.819937   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:13.820138   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:13.820149   28158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:10:13.925187   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:10:13.925286   28158 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:10:13.925301   28158 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:10:13.925311   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:10:13.925553   28158 buildroot.go:166] provisioning hostname "ha-227346-m02"
	I0819 17:10:13.925592   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:10:13.925779   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:13.928355   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.928693   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.928719   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.928902   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:13.929053   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.929193   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.929351   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:13.929546   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:13.929742   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:13.929763   28158 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346-m02 && echo "ha-227346-m02" | sudo tee /etc/hostname
	I0819 17:10:14.046025   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346-m02
	
	I0819 17:10:14.046048   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.048692   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.049048   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.049073   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.049308   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.049483   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.049636   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.049785   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.049959   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:14.050116   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:14.050133   28158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:10:14.165466   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:10:14.165498   28158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:10:14.165519   28158 buildroot.go:174] setting up certificates
	I0819 17:10:14.165533   28158 provision.go:84] configureAuth start
	I0819 17:10:14.165545   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:10:14.165830   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:14.168646   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.169139   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.169167   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.169453   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.171899   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.172269   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.172289   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.172450   28158 provision.go:143] copyHostCerts
	I0819 17:10:14.172494   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:10:14.172534   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:10:14.172545   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:10:14.172628   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:10:14.172730   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:10:14.172775   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:10:14.172786   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:10:14.172825   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:10:14.172917   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:10:14.172943   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:10:14.172956   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:10:14.173015   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:10:14.173086   28158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346-m02 san=[127.0.0.1 192.168.39.189 ha-227346-m02 localhost minikube]
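Editor's note: the provision step above generates a server certificate signed by the local minikube CA with SANs covering 127.0.0.1, the node IP 192.168.39.189, the hostname ha-227346-m02, localhost, and minikube. The sketch below shows how those SANs (IP addresses and DNS names) end up on a certificate with Go's crypto/x509; it is self-signed for brevity rather than CA-signed as in the log.

// san_cert_sketch.go: issue a certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// SANs mirroring the ones in the log: loopback, the node IP, and hostnames.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"example"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.189")},
		DNSNames:     []string{"ha-227346-m02", "localhost", "minikube"},
	}

	// Self-signed for brevity; minikube signs with its own CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}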
	I0819 17:10:14.404824   28158 provision.go:177] copyRemoteCerts
	I0819 17:10:14.404882   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:10:14.404904   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.407468   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.408000   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.408026   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.408194   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.408394   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.408546   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.408688   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
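Editor's note: the remote provisioning steps in this section (copying certificates, writing the hostname and /etc/hosts, the CRI-O sysconfig drop-in) all run over an SSH client like the one created above, authenticated with the machine's generated key. The snippet below is a standalone sketch of that pattern using golang.org/x/crypto/ssh, not minikube's ssh_runner; the host, user, and key path are placeholders, and host-key checking is disabled only to mirror the test environment's StrictHostKeyChecking=no.

// ssh_run_sketch.go: run one remote command with key-based SSH auth.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs cmd on host as user, authenticating with the private key
// at keyPath, and returns combined stdout/stderr.
func runRemote(host, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Test-environment shortcut, mirroring StrictHostKeyChecking=no above.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder values; substitute a real host, user, and key path.
	out, err := runRemote("192.168.39.189", "docker", "/path/to/id_rsa", "cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}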
	I0819 17:10:14.490366   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:10:14.490439   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:10:14.512203   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:10:14.512269   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:10:14.533541   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:10:14.533607   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:10:14.555048   28158 provision.go:87] duration metric: took 389.502363ms to configureAuth
	I0819 17:10:14.555077   28158 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:10:14.555276   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:10:14.555378   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.557985   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.558348   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.558371   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.558519   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.558726   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.558897   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.559040   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.559174   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:14.559361   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:14.559384   28158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:10:14.822319   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:10:14.822349   28158 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:10:14.822360   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetURL
	I0819 17:10:14.823708   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using libvirt version 6000000
	I0819 17:10:14.825607   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.825994   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.826023   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.826171   28158 main.go:141] libmachine: Docker is up and running!
	I0819 17:10:14.826186   28158 main.go:141] libmachine: Reticulating splines...
	I0819 17:10:14.826194   28158 client.go:171] duration metric: took 28.639543737s to LocalClient.Create
	I0819 17:10:14.826217   28158 start.go:167] duration metric: took 28.639597444s to libmachine.API.Create "ha-227346"
	I0819 17:10:14.826230   28158 start.go:293] postStartSetup for "ha-227346-m02" (driver="kvm2")
	I0819 17:10:14.826241   28158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:10:14.826271   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:14.826457   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:10:14.826481   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.828693   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.829056   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.829082   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.829188   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.829359   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.829476   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.829603   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:10:14.910460   28158 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:10:14.914512   28158 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:10:14.914540   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:10:14.914619   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:10:14.914692   28158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:10:14.914701   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:10:14.914804   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:10:14.925300   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:10:14.947332   28158 start.go:296] duration metric: took 121.09158ms for postStartSetup
	I0819 17:10:14.947386   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetConfigRaw
	I0819 17:10:14.947931   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:14.950477   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.950907   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.950938   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.951165   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:10:14.951391   28158 start.go:128] duration metric: took 28.782699753s to createHost
	I0819 17:10:14.951414   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.953585   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.953904   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.953932   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.954058   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.954230   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.954389   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.954524   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.954677   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:14.954847   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:14.954859   28158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:10:15.061309   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087415.043658666
	
	I0819 17:10:15.061332   28158 fix.go:216] guest clock: 1724087415.043658666
	I0819 17:10:15.061342   28158 fix.go:229] Guest: 2024-08-19 17:10:15.043658666 +0000 UTC Remote: 2024-08-19 17:10:14.951405072 +0000 UTC m=+70.948138926 (delta=92.253594ms)
	I0819 17:10:15.061358   28158 fix.go:200] guest clock delta is within tolerance: 92.253594ms
	I0819 17:10:15.061363   28158 start.go:83] releasing machines lock for "ha-227346-m02", held for 28.892778383s
	I0819 17:10:15.061380   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.061655   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:15.064201   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.064623   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:15.064647   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.066928   28158 out.go:177] * Found network options:
	I0819 17:10:15.068459   28158 out.go:177]   - NO_PROXY=192.168.39.205
	W0819 17:10:15.069697   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:10:15.069730   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.070207   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.070390   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.070516   28158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:10:15.070571   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	W0819 17:10:15.070652   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:10:15.070726   28158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:10:15.070748   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:15.073465   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.073793   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.073955   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:15.073985   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.074153   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:15.074173   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.074154   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:15.074371   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:15.074450   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:15.074600   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:15.074608   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:15.074740   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:15.074781   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:10:15.074855   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:10:15.314729   28158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:10:15.320614   28158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:10:15.320676   28158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:10:15.335455   28158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 17:10:15.335477   28158 start.go:495] detecting cgroup driver to use...
	I0819 17:10:15.335551   28158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:10:15.349950   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:10:15.362294   28158 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:10:15.362354   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:10:15.374285   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:10:15.386522   28158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:10:15.500254   28158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:10:15.668855   28158 docker.go:233] disabling docker service ...
	I0819 17:10:15.668922   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:10:15.683306   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:10:15.695138   28158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:10:15.806495   28158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:10:15.913086   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:10:15.926950   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:10:15.943526   28158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:10:15.943584   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.952925   28158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:10:15.952987   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.962238   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.971415   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.980884   28158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:10:15.990330   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.999511   28158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:16.014505   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
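The block above writes /etc/crictl.yaml and then patches CRI-O's drop-in config with sed: it pins the pause image to registry.k8s.io/pause:3.10 and switches the cgroup manager to cgroupfs. The following Go sketch reproduces the effect of those commands with plain file I/O and regexp replacement; it is an illustration only, not minikube's own implementation, and it assumes it runs as root on the guest.

package main

import (
	"fmt"
	"os"
	"regexp"
)

const crioDropIn = "/etc/crio/crio.conf.d/02-crio.conf"

func main() {
	// Equivalent of the `printf ... | sudo tee /etc/crictl.yaml` step above.
	if err := os.WriteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Equivalent of the two sed edits: pin the pause image and switch the
	// cgroup manager to cgroupfs in CRI-O's drop-in config.
	data, err := os.ReadFile(crioDropIn)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(crioDropIn, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

CRI-O only picks up the drop-in after a restart, which is why the log restarts crio a few lines further down.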
	I0819 17:10:16.023612   28158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:10:16.032033   28158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:10:16.032091   28158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:10:16.043635   28158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
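The three steps above probe the net.bridge.bridge-nf-call-iptables sysctl, load br_netfilter when the sysctl path is missing, and force net.ipv4.ip_forward to 1. A minimal Go sketch of the same checks against /proc/sys, assuming root on the guest (this is not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The bridge netfilter sysctl only exists once br_netfilter is loaded.
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// Mirrors "sudo modprobe br_netfilter" from the log above.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
			os.Exit(1)
		}
	}

	// Mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintf(os.Stderr, "enable ip_forward: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and IPv4 forwarding configured")
}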
	I0819 17:10:16.052831   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:10:16.153853   28158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:10:16.287924   28158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:10:16.287995   28158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:10:16.292630   28158 start.go:563] Will wait 60s for crictl version
	I0819 17:10:16.292679   28158 ssh_runner.go:195] Run: which crictl
	I0819 17:10:16.296008   28158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:10:16.335502   28158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:10:16.335581   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:10:16.362522   28158 ssh_runner.go:195] Run: crio --version
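After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl and crio --version. A small Go sketch of such a bounded wait; the 500ms poll interval is an assumption, not something taken from the log:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires, roughly what
// the "Will wait 60s for socket path" step above does after restarting CRI-O.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI-O socket is up")
}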
	I0819 17:10:16.395028   28158 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:10:16.396400   28158 out.go:177]   - env NO_PROXY=192.168.39.205
	I0819 17:10:16.397616   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:16.400485   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:16.400833   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:16.400855   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:16.401116   28158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:10:16.404903   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:10:16.417153   28158 mustload.go:65] Loading cluster: ha-227346
	I0819 17:10:16.417360   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:10:16.417719   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:10:16.417750   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:10:16.432463   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35121
	I0819 17:10:16.432873   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:10:16.433379   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:10:16.433402   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:10:16.433722   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:10:16.433899   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:10:16.435405   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:10:16.435779   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:10:16.435808   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:10:16.450412   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0819 17:10:16.450873   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:10:16.451278   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:10:16.451295   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:10:16.451630   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:10:16.451796   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:10:16.451959   28158 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.189
	I0819 17:10:16.451973   28158 certs.go:194] generating shared ca certs ...
	I0819 17:10:16.451993   28158 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:10:16.452138   28158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:10:16.452183   28158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:10:16.452195   28158 certs.go:256] generating profile certs ...
	I0819 17:10:16.452284   28158 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:10:16.452339   28158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953
	I0819 17:10:16.452355   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.189 192.168.39.254]
	I0819 17:10:16.554898   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953 ...
	I0819 17:10:16.554929   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953: {Name:mk89a7010c986f3cf61c1e174f4fde9f10d23b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:10:16.555128   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953 ...
	I0819 17:10:16.555147   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953: {Name:mk5fa5db66f2352166e304769812bf8b73d24529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:10:16.555243   28158 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:10:16.555383   28158 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
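The profile certificate generated above carries IP SANs for the in-cluster service VIPs (10.96.0.1, 10.0.0.1), localhost, both control-plane node IPs, and the kube-vip address 192.168.39.254, so the API server presents a valid TLS identity on any of them. Below is a hedged sketch of issuing such a cert with Go's crypto/x509; the throwaway CA, the ECDSA key type, and the subject names are illustrative, not what minikube actually uses:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the shared "minikubeCA" above.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// API-server serving cert; the SAN list is copied from the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.205"), net.ParseIP("192.168.39.189"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}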
	I0819 17:10:16.555505   28158 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:10:16.555520   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:10:16.555533   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:10:16.555546   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:10:16.555561   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:10:16.555574   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:10:16.555588   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:10:16.555600   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:10:16.555610   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:10:16.555656   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:10:16.555683   28158 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:10:16.555692   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:10:16.555712   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:10:16.555741   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:10:16.555775   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:10:16.555831   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:10:16.555870   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:10:16.555892   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:10:16.555910   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:16.555948   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:10:16.558824   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:16.559224   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:10:16.559244   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:16.559482   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:10:16.559696   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:10:16.559884   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:10:16.560038   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:10:16.633132   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 17:10:16.638719   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 17:10:16.649310   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 17:10:16.653594   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 17:10:16.663992   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 17:10:16.667976   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 17:10:16.678384   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 17:10:16.682663   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 17:10:16.692280   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 17:10:16.696445   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 17:10:16.705885   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 17:10:16.709510   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 17:10:16.719350   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:10:16.746516   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:10:16.769754   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:10:16.793189   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:10:16.815706   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 17:10:16.838401   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:10:16.860744   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:10:16.885085   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:10:16.908195   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:10:16.930925   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:10:16.952696   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:10:16.976569   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 17:10:16.991528   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 17:10:17.006563   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 17:10:17.021385   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 17:10:17.036452   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 17:10:17.051348   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 17:10:17.067308   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 17:10:17.084583   28158 ssh_runner.go:195] Run: openssl version
	I0819 17:10:17.090258   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:10:17.100790   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:10:17.105246   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:10:17.105294   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:10:17.111091   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:10:17.121772   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:10:17.132799   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:10:17.137305   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:10:17.137352   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:10:17.142630   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:10:17.152428   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:10:17.162391   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:17.166590   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:17.166637   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:17.171943   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
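Each CA bundle above is hashed with `openssl x509 -hash -noout -in <cert>` and then symlinked as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can find it by subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of those two steps; the openssl flags and paths come from the log, while the helper name and the direct-to-source symlink are simplifications:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the two shell steps above: compute the OpenSSL
// subject hash of a CA bundle and symlink it as <hash>.0 in /etc/ssl/certs.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash of %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}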
	I0819 17:10:17.183460   28158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:10:17.187758   28158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:10:17.187807   28158 kubeadm.go:934] updating node {m02 192.168.39.189 8443 v1.31.0 crio true true} ...
	I0819 17:10:17.187878   28158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
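The kubelet unit override above is rendered from node-specific values: the binary path for v1.31.0, --hostname-override=ha-227346-m02 and --node-ip=192.168.39.189. A minimal text/template sketch that produces the same [Service] drop-in; the data struct is illustrative and not minikube's real template type:

package main

import (
	"os"
	"text/template"
)

// A cut-down version of the kubelet unit override shown above.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0", "ha-227346-m02", "192.168.39.189"})
}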
	I0819 17:10:17.187900   28158 kube-vip.go:115] generating kube-vip config ...
	I0819 17:10:17.187931   28158 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:10:17.203458   28158 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:10:17.203539   28158 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
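The generated kube-vip static-pod manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml later in the log. A quick way to sanity-check that such a manifest at least parses is to decode it into a small struct, as in this sketch; the projection type is made up for illustration, and gopkg.in/yaml.v3 is assumed to be available:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// pod is a minimal projection of the static-pod manifest above; only the
// fields we want to sanity-check are declared.
type pod struct {
	Kind string `yaml:"kind"`
	Spec struct {
		Containers []struct {
			Name  string `yaml:"name"`
			Image string `yaml:"image"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var p pod
	if err := yaml.Unmarshal(data, &p); err != nil {
		fmt.Fprintln(os.Stderr, "manifest does not parse:", err)
		os.Exit(1)
	}
	fmt.Printf("kind=%s image=%s\n", p.Kind, p.Spec.Containers[0].Image)
}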
	I0819 17:10:17.203597   28158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:10:17.213862   28158 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 17:10:17.213921   28158 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 17:10:17.223785   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 17:10:17.223808   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:10:17.223885   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:10:17.223888   28158 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 17:10:17.223921   28158 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 17:10:17.227943   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 17:10:17.227966   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 17:10:18.042144   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:10:18.042226   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:10:18.048250   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 17:10:18.048290   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 17:10:18.177183   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:10:18.215271   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:10:18.215370   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:10:18.221227   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 17:10:18.221265   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
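The download URLs above carry a ?checksum=file:<url>.sha256 hint, meaning each binary is verified against the digest published next to it on dl.k8s.io. A self-contained sketch of that download-and-verify step; the fetchChecked name is made up and error handling is trimmed to the essentials:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchChecked downloads url into dest and compares its SHA-256 against the
// published <url>.sha256 file, the same scheme the download URLs above use.
func fetchChecked(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	if err := fetchChecked(url, "kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}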
	I0819 17:10:18.622342   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 17:10:18.631749   28158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 17:10:18.647663   28158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:10:18.664188   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 17:10:18.681184   28158 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:10:18.684885   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:10:18.696116   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:10:18.813538   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
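The /etc/hosts update at 17:10:18.684885 is a grep -v of any existing control-plane.minikube.internal line followed by appending the VIP mapping. The same edit in Go, assuming write access to the file (the helper name is invented):

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostsEntry mirrors the shell one-liner above: drop any existing line for
// the host name and append a fresh "IP<TAB>name" mapping.
func addHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := addHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("hosts entry written")
}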
	I0819 17:10:18.832105   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:10:18.832448   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:10:18.832497   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:10:18.847682   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0819 17:10:18.848098   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:10:18.848538   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:10:18.848561   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:10:18.848869   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:10:18.849075   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:10:18.849201   28158 start.go:317] joinCluster: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:10:18.849320   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 17:10:18.849344   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:10:18.852504   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:18.852978   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:10:18.853003   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:18.853160   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:10:18.853361   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:10:18.853535   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:10:18.853696   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:10:18.999093   28158 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:10:18.999140   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dbt7f7.h17s4g2mjf3dg3ww --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m02 --control-plane --apiserver-advertise-address=192.168.39.189 --apiserver-bind-port=8443"
	I0819 17:10:41.001845   28158 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dbt7f7.h17s4g2mjf3dg3ww --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m02 --control-plane --apiserver-advertise-address=192.168.39.189 --apiserver-bind-port=8443": (22.002675591s)
	I0819 17:10:41.001879   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 17:10:41.465428   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-227346-m02 minikube.k8s.io/updated_at=2024_08_19T17_10_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-227346 minikube.k8s.io/primary=false
	I0819 17:10:41.592904   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-227346-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 17:10:41.715661   28158 start.go:319] duration metric: took 22.866456336s to joinCluster
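The join itself is two commands: `kubeadm token create --print-join-command --ttl=0` on the existing control plane, then the printed command extended with --control-plane, --apiserver-advertise-address and --apiserver-bind-port on the new node, as shown in the log above. A local sketch of that flow using os/exec; minikube actually runs both ends over SSH, and the addresses here are simply copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (run on the existing control plane): print a join command with a
	// fresh bootstrap token, as in the `kubeadm token create` call above.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "token create:", err)
		os.Exit(1)
	}
	join := strings.TrimSpace(string(out))

	// Step 2 (run on the new node): extend the printed command with the
	// control-plane flags the log shows for ha-227346-m02.
	join += " --control-plane --apiserver-advertise-address=192.168.39.189 --apiserver-bind-port=8443"
	fmt.Println("would run:", join)

	// Executing it is left to the caller; uncomment to actually join:
	// cmd := exec.Command("bash", "-c", join)
	// cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	// if err := cmd.Run(); err != nil { os.Exit(1) }
}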
	I0819 17:10:41.715746   28158 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:10:41.716061   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:10:41.717273   28158 out.go:177] * Verifying Kubernetes components...
	I0819 17:10:41.718628   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:10:41.969118   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:10:41.997090   28158 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:10:41.997406   28158 kapi.go:59] client config for ha-227346: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt", KeyFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key", CAFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 17:10:41.997494   28158 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.205:8443
	I0819 17:10:41.997757   28158 node_ready.go:35] waiting up to 6m0s for node "ha-227346-m02" to be "Ready" ...
	I0819 17:10:41.997867   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:41.997878   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:41.997889   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:41.997896   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:42.018719   28158 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0819 17:10:42.498706   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:42.498731   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:42.498742   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:42.498748   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:42.503472   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:42.998305   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:42.998328   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:42.998337   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:42.998342   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:43.002110   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:43.497971   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:43.497993   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:43.498004   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:43.498009   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:43.504416   28158 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 17:10:43.998737   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:43.998766   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:43.998778   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:43.998784   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:44.004655   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:10:44.005243   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:44.498460   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:44.498497   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:44.498506   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:44.498510   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:44.502097   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:44.998098   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:44.998124   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:44.998136   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:44.998143   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:45.002041   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:45.498316   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:45.498338   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:45.498349   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:45.498354   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:45.502591   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:45.998601   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:45.998625   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:45.998633   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:45.998637   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:46.001767   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:46.498824   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:46.498848   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:46.498859   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:46.498867   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:46.503050   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:46.503843   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:46.998034   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:46.998055   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:46.998063   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:46.998067   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:47.001237   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:47.498117   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:47.498142   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:47.498149   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:47.498154   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:47.501279   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:47.998889   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:47.998911   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:47.998919   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:47.998923   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:48.002034   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:48.497954   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:48.497978   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:48.497986   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:48.497990   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:48.501461   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:48.997984   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:48.998009   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:48.998020   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:48.998028   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:49.009078   28158 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 17:10:49.009709   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:49.498580   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:49.498602   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:49.498609   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:49.498613   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:49.501899   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:49.997947   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:49.997973   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:49.997985   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:49.997990   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:50.002900   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:50.498790   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:50.498814   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:50.498825   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:50.498834   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:50.502115   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:50.998060   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:50.998084   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:50.998092   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:50.998096   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:51.001338   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:51.498702   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:51.498724   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:51.498732   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:51.498736   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:51.501967   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:51.502631   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:51.998953   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:51.998980   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:51.998990   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:51.998993   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:52.002432   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:52.498310   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:52.498335   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:52.498350   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:52.498356   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:52.501524   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:52.998631   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:52.998654   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:52.998661   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:52.998664   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:53.002129   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:53.498111   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:53.498133   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:53.498142   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:53.498145   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:53.501106   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:10:53.998417   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:53.998442   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:53.998450   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:53.998454   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:54.001730   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:54.002348   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:54.498533   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:54.498559   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:54.498568   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:54.498572   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:54.501740   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:54.998768   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:54.998795   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:54.998806   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:54.998812   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:55.002045   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:55.498562   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:55.498586   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:55.498594   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:55.498598   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:55.501665   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:55.998689   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:55.998712   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:55.998720   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:55.998725   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:56.002395   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:56.003190   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:56.498884   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:56.498901   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:56.498909   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:56.498916   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:56.502288   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:56.998395   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:56.998418   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:56.998426   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:56.998430   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:57.001613   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:57.498638   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:57.498661   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:57.498669   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:57.498674   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:57.502084   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:57.997914   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:57.997934   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:57.997940   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:57.997944   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:58.000845   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:10:58.498823   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:58.498847   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:58.498857   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:58.498862   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:58.501955   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:58.502615   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:58.998498   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:58.998518   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:58.998526   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:58.998531   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:59.001617   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:59.498169   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:59.498192   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:59.498205   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:59.498209   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:59.501467   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:59.998446   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:59.998469   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:59.998477   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:59.998480   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.001728   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.498624   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:00.498641   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.498648   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.498652   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.501614   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.502154   28158 node_ready.go:49] node "ha-227346-m02" has status "Ready":"True"
	I0819 17:11:00.502172   28158 node_ready.go:38] duration metric: took 18.504391343s for node "ha-227346-m02" to be "Ready" ...
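The loop above is a simple poll: a GET on the node object roughly every 500 ms until its Ready condition reports True. A minimal sketch of the same kind of check with client-go is below; it is illustrative only, not minikube's own code, and the kubeconfig path, node name, and timings are assumptions lifted from the log.

    // Sketch: poll a node's Ready condition, as the log above does. Illustrative only.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed kubeconfig path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms, give up after 6 minutes, mirroring the cadence in the log.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-227346-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep retrying on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }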
	I0819 17:11:00.502182   28158 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:11:00.502285   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:00.502296   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.502306   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.502312   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.506087   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.513482   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.513559   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-9s77g
	I0819 17:11:00.513569   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.513579   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.513588   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.515942   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.516544   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.516557   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.516567   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.516572   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.518878   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.519354   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.519376   28158 pod_ready.go:82] duration metric: took 5.867708ms for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.519389   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.519447   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-r68td
	I0819 17:11:00.519455   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.519462   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.519470   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.521900   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.522627   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.522642   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.522651   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.522656   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.524800   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.525352   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.525376   28158 pod_ready.go:82] duration metric: took 5.968846ms for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.525388   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.525449   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346
	I0819 17:11:00.525459   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.525469   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.525480   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.527626   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.528068   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.528082   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.528089   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.528092   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.530155   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.530632   28158 pod_ready.go:93] pod "etcd-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.530646   28158 pod_ready.go:82] duration metric: took 5.247627ms for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.530654   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.530705   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m02
	I0819 17:11:00.530713   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.530719   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.530725   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.532669   28158 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 17:11:00.533187   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:00.533201   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.533211   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.533217   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.535027   28158 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 17:11:00.535499   28158 pod_ready.go:93] pod "etcd-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.535513   28158 pod_ready.go:82] duration metric: took 4.853299ms for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.535525   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.698905   28158 request.go:632] Waited for 163.321682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:11:00.698978   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:11:00.698983   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.698993   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.699001   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.702229   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.899354   28158 request.go:632] Waited for 196.3754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.899411   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.899416   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.899433   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.899451   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.903052   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.903575   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.903590   28158 pod_ready.go:82] duration metric: took 368.059975ms for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.903608   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.098874   28158 request.go:632] Waited for 195.200511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:11:01.098947   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:11:01.098952   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.098960   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.098968   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.102418   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.299539   28158 request.go:632] Waited for 196.428899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:01.299627   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:01.299639   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.299647   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.299652   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.302808   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.303240   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:01.303258   28158 pod_ready.go:82] duration metric: took 399.642843ms for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
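The recurring "Waited for ... due to client-side throttling" messages come from client-go's built-in client-side rate limiter, whose defaults are low (on the order of 5 requests per second with a burst of 10). A caller polling this aggressively can raise those limits on the rest.Config before building the clientset; the snippet below is an illustrative tuning sketch, not what minikube does, and kubeconfigPath is a placeholder.

    // Sketch: relax client-go's client-side rate limits so tight polling loops
    // are not artificially delayed. Values are illustrative.
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    if err != nil {
    	panic(err)
    }
    cfg.QPS = 50    // default is roughly 5 requests/second
    cfg.Burst = 100 // default burst is roughly 10
    cs, err := kubernetes.NewForConfig(cfg)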
	I0819 17:11:01.303267   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.499450   28158 request.go:632] Waited for 196.101424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:11:01.499502   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:11:01.499507   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.499514   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.499519   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.503016   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.699168   28158 request.go:632] Waited for 195.362344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:01.699247   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:01.699254   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.699314   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.699330   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.702600   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.703116   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:01.703135   28158 pod_ready.go:82] duration metric: took 399.862476ms for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.703145   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.899255   28158 request.go:632] Waited for 196.044062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:11:01.899335   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:11:01.899346   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.899359   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.899376   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.902884   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.099095   28158 request.go:632] Waited for 195.34707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.099163   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.099169   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.099176   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.099181   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.102634   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.103074   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:02.103093   28158 pod_ready.go:82] duration metric: took 399.942297ms for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.103103   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.299277   28158 request.go:632] Waited for 196.111667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:11:02.299333   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:11:02.299338   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.299347   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.299350   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.302630   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.499585   28158 request.go:632] Waited for 196.381609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.499642   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.499647   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.499654   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.499658   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.502762   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.503444   28158 pod_ready.go:93] pod "kube-proxy-6lhlp" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:02.503468   28158 pod_ready.go:82] duration metric: took 400.355898ms for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.503480   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.699428   28158 request.go:632] Waited for 195.87825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:11:02.699509   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:11:02.699517   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.699525   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.699529   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.703997   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:11:02.899114   28158 request.go:632] Waited for 194.378055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:02.899179   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:02.899188   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.899199   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.899215   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.902614   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.903078   28158 pod_ready.go:93] pod "kube-proxy-9xpm4" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:02.903138   28158 pod_ready.go:82] duration metric: took 399.606177ms for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.903157   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.099330   28158 request.go:632] Waited for 196.104442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:11:03.099431   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:11:03.099443   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.099454   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.099461   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.103207   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.299483   28158 request.go:632] Waited for 195.412597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:03.299551   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:03.299560   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.299585   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.299607   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.302800   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.303465   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:03.303484   28158 pod_ready.go:82] duration metric: took 400.318392ms for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.303497   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.499637   28158 request.go:632] Waited for 196.072281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:11:03.499711   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:11:03.499717   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.499724   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.499728   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.502937   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.698798   28158 request.go:632] Waited for 195.290311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:03.698880   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:03.698887   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.698894   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.698902   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.702079   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.702775   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:03.702792   28158 pod_ready.go:82] duration metric: took 399.285458ms for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.702803   28158 pod_ready.go:39] duration metric: took 3.200583312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
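The per-pod waits above check the Ready condition of every pod matching the system-critical label selectors listed in the log. A minimal sketch of that check, reusing the client-go imports and clientset from the earlier sketch; the helper name systemPodsReady is made up for illustration.

    // Sketch: verify that pods matching the system-critical selectors are Ready.
    func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
    	selectors := []string{
    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    	}
    	for _, sel := range selectors {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			return false, err
    		}
    		for _, p := range pods.Items {
    			ready := false
    			for _, c := range p.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					ready = true
    				}
    			}
    			if !ready {
    				return false, nil // at least one system-critical pod is not Ready yet
    			}
    		}
    	}
    	return true, nil
    }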
	I0819 17:11:03.702815   28158 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:11:03.702862   28158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:11:03.717359   28158 api_server.go:72] duration metric: took 22.001580434s to wait for apiserver process to appear ...
	I0819 17:11:03.717390   28158 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:11:03.717410   28158 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0819 17:11:03.722002   28158 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0819 17:11:03.722070   28158 round_trippers.go:463] GET https://192.168.39.205:8443/version
	I0819 17:11:03.722081   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.722091   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.722099   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.722965   28158 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 17:11:03.723083   28158 api_server.go:141] control plane version: v1.31.0
	I0819 17:11:03.723100   28158 api_server.go:131] duration metric: took 5.703682ms to wait for apiserver health ...
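The health wait above is a plain HTTPS GET against /healthz and then /version on the apiserver. A minimal sketch with net/http follows; it assumes anonymous access to those endpoints is allowed (the default system:public-info-viewer role usually permits it), and skipping TLS verification is a shortcut for illustration only, not something a production client should do.

    // Sketch: probe the apiserver's /healthz and /version endpoints, as in the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustrative shortcut
    	}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://192.168.39.205:8443" + path)
    		if err != nil {
    			fmt.Println(path, "error:", err)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Println(path, resp.StatusCode, string(body))
    	}
    }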
	I0819 17:11:03.723108   28158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:11:03.899648   28158 request.go:632] Waited for 176.468967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:03.899727   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:03.899735   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.899749   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.899757   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.904179   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:11:03.909765   28158 system_pods.go:59] 17 kube-system pods found
	I0819 17:11:03.909792   28158 system_pods.go:61] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:11:03.909796   28158 system_pods.go:61] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:11:03.909800   28158 system_pods.go:61] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:11:03.909804   28158 system_pods.go:61] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:11:03.909807   28158 system_pods.go:61] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:11:03.909811   28158 system_pods.go:61] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:11:03.909814   28158 system_pods.go:61] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:11:03.909817   28158 system_pods.go:61] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:11:03.909821   28158 system_pods.go:61] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:11:03.909825   28158 system_pods.go:61] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:11:03.909828   28158 system_pods.go:61] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:11:03.909832   28158 system_pods.go:61] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:11:03.909835   28158 system_pods.go:61] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:11:03.909838   28158 system_pods.go:61] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:11:03.909841   28158 system_pods.go:61] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:11:03.909844   28158 system_pods.go:61] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:11:03.909847   28158 system_pods.go:61] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:11:03.909855   28158 system_pods.go:74] duration metric: took 186.742562ms to wait for pod list to return data ...
	I0819 17:11:03.909862   28158 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:11:04.099680   28158 request.go:632] Waited for 189.755136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:11:04.099732   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:11:04.099737   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:04.099744   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:04.099749   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:04.103334   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:04.103566   28158 default_sa.go:45] found service account: "default"
	I0819 17:11:04.103583   28158 default_sa.go:55] duration metric: took 193.71521ms for default service account to be created ...
	I0819 17:11:04.103593   28158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:11:04.299116   28158 request.go:632] Waited for 195.437455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:04.299188   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:04.299195   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:04.299203   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:04.299216   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:04.303053   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:04.308059   28158 system_pods.go:86] 17 kube-system pods found
	I0819 17:11:04.308087   28158 system_pods.go:89] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:11:04.308093   28158 system_pods.go:89] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:11:04.308097   28158 system_pods.go:89] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:11:04.308101   28158 system_pods.go:89] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:11:04.308105   28158 system_pods.go:89] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:11:04.308108   28158 system_pods.go:89] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:11:04.308113   28158 system_pods.go:89] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:11:04.308117   28158 system_pods.go:89] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:11:04.308121   28158 system_pods.go:89] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:11:04.308124   28158 system_pods.go:89] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:11:04.308127   28158 system_pods.go:89] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:11:04.308131   28158 system_pods.go:89] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:11:04.308134   28158 system_pods.go:89] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:11:04.308137   28158 system_pods.go:89] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:11:04.308140   28158 system_pods.go:89] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:11:04.308144   28158 system_pods.go:89] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:11:04.308147   28158 system_pods.go:89] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:11:04.308153   28158 system_pods.go:126] duration metric: took 204.542478ms to wait for k8s-apps to be running ...
	I0819 17:11:04.308162   28158 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:11:04.308204   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:11:04.324047   28158 system_svc.go:56] duration metric: took 15.875431ms WaitForService to wait for kubelet
	I0819 17:11:04.324083   28158 kubeadm.go:582] duration metric: took 22.608307073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:11:04.324105   28158 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:11:04.499529   28158 request.go:632] Waited for 175.342422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes
	I0819 17:11:04.499596   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes
	I0819 17:11:04.499609   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:04.499617   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:04.499621   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:04.503453   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:04.504089   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:11:04.504111   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:11:04.504122   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:11:04.504126   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:11:04.504131   28158 node_conditions.go:105] duration metric: took 180.020079ms to run NodePressure ...
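The NodePressure step reads each node's reported capacity, which is where the ephemeral-storage and CPU figures in the lines above come from. A short sketch of the same read, reusing the clientset, context, and imports from the first sketch:

    // Sketch: list nodes and print the capacity fields reported in the log.
    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
    	panic(err)
    }
    for _, n := range nodes.Items {
    	storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := n.Status.Capacity[corev1.ResourceCPU]
    	fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    }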
	I0819 17:11:04.504143   28158 start.go:241] waiting for startup goroutines ...
	I0819 17:11:04.504173   28158 start.go:255] writing updated cluster config ...
	I0819 17:11:04.506186   28158 out.go:201] 
	I0819 17:11:04.507575   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:04.507676   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:11:04.509144   28158 out.go:177] * Starting "ha-227346-m03" control-plane node in "ha-227346" cluster
	I0819 17:11:04.510145   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:11:04.510164   28158 cache.go:56] Caching tarball of preloaded images
	I0819 17:11:04.510253   28158 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:11:04.510264   28158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:11:04.510345   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:11:04.510515   28158 start.go:360] acquireMachinesLock for ha-227346-m03: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:11:04.510555   28158 start.go:364] duration metric: took 22.476µs to acquireMachinesLock for "ha-227346-m03"
	I0819 17:11:04.510572   28158 start.go:93] Provisioning new machine with config: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:11:04.510664   28158 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 17:11:04.512151   28158 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 17:11:04.512219   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:04.512249   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:04.527050   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36431
	I0819 17:11:04.527528   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:04.527955   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:04.527976   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:04.528289   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:04.528487   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:04.528677   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:04.528837   28158 start.go:159] libmachine.API.Create for "ha-227346" (driver="kvm2")
	I0819 17:11:04.528860   28158 client.go:168] LocalClient.Create starting
	I0819 17:11:04.528894   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 17:11:04.528931   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:11:04.528948   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:11:04.529013   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 17:11:04.529036   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:11:04.529046   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:11:04.529070   28158 main.go:141] libmachine: Running pre-create checks...
	I0819 17:11:04.529083   28158 main.go:141] libmachine: (ha-227346-m03) Calling .PreCreateCheck
	I0819 17:11:04.529286   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetConfigRaw
	I0819 17:11:04.529646   28158 main.go:141] libmachine: Creating machine...
	I0819 17:11:04.529660   28158 main.go:141] libmachine: (ha-227346-m03) Calling .Create
	I0819 17:11:04.529777   28158 main.go:141] libmachine: (ha-227346-m03) Creating KVM machine...
	I0819 17:11:04.530855   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found existing default KVM network
	I0819 17:11:04.530938   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found existing private KVM network mk-ha-227346
	I0819 17:11:04.531058   28158 main.go:141] libmachine: (ha-227346-m03) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03 ...
	I0819 17:11:04.531080   28158 main.go:141] libmachine: (ha-227346-m03) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:11:04.531136   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.531043   28924 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:11:04.531228   28158 main.go:141] libmachine: (ha-227346-m03) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:11:04.755830   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.755704   28924 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa...
	I0819 17:11:04.872298   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.872159   28924 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/ha-227346-m03.rawdisk...
	I0819 17:11:04.872340   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Writing magic tar header
	I0819 17:11:04.872357   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Writing SSH key tar header
	I0819 17:11:04.872382   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.872324   28924 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03 ...
	I0819 17:11:04.872510   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03
	I0819 17:11:04.872530   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03 (perms=drwx------)
	I0819 17:11:04.872538   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 17:11:04.872553   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:11:04.872564   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 17:11:04.872584   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:11:04.872596   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:11:04.872601   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:11:04.872611   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 17:11:04.872623   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 17:11:04.872636   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home
	I0819 17:11:04.872649   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:11:04.872660   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:11:04.872671   28158 main.go:141] libmachine: (ha-227346-m03) Creating domain...
	I0819 17:11:04.872682   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Skipping /home - not owner
	I0819 17:11:04.873536   28158 main.go:141] libmachine: (ha-227346-m03) define libvirt domain using xml: 
	I0819 17:11:04.873558   28158 main.go:141] libmachine: (ha-227346-m03) <domain type='kvm'>
	I0819 17:11:04.873569   28158 main.go:141] libmachine: (ha-227346-m03)   <name>ha-227346-m03</name>
	I0819 17:11:04.873575   28158 main.go:141] libmachine: (ha-227346-m03)   <memory unit='MiB'>2200</memory>
	I0819 17:11:04.873586   28158 main.go:141] libmachine: (ha-227346-m03)   <vcpu>2</vcpu>
	I0819 17:11:04.873600   28158 main.go:141] libmachine: (ha-227346-m03)   <features>
	I0819 17:11:04.873607   28158 main.go:141] libmachine: (ha-227346-m03)     <acpi/>
	I0819 17:11:04.873612   28158 main.go:141] libmachine: (ha-227346-m03)     <apic/>
	I0819 17:11:04.873624   28158 main.go:141] libmachine: (ha-227346-m03)     <pae/>
	I0819 17:11:04.873634   28158 main.go:141] libmachine: (ha-227346-m03)     
	I0819 17:11:04.873661   28158 main.go:141] libmachine: (ha-227346-m03)   </features>
	I0819 17:11:04.873681   28158 main.go:141] libmachine: (ha-227346-m03)   <cpu mode='host-passthrough'>
	I0819 17:11:04.873691   28158 main.go:141] libmachine: (ha-227346-m03)   
	I0819 17:11:04.873700   28158 main.go:141] libmachine: (ha-227346-m03)   </cpu>
	I0819 17:11:04.873710   28158 main.go:141] libmachine: (ha-227346-m03)   <os>
	I0819 17:11:04.873720   28158 main.go:141] libmachine: (ha-227346-m03)     <type>hvm</type>
	I0819 17:11:04.873733   28158 main.go:141] libmachine: (ha-227346-m03)     <boot dev='cdrom'/>
	I0819 17:11:04.873743   28158 main.go:141] libmachine: (ha-227346-m03)     <boot dev='hd'/>
	I0819 17:11:04.873752   28158 main.go:141] libmachine: (ha-227346-m03)     <bootmenu enable='no'/>
	I0819 17:11:04.873761   28158 main.go:141] libmachine: (ha-227346-m03)   </os>
	I0819 17:11:04.873770   28158 main.go:141] libmachine: (ha-227346-m03)   <devices>
	I0819 17:11:04.873781   28158 main.go:141] libmachine: (ha-227346-m03)     <disk type='file' device='cdrom'>
	I0819 17:11:04.873799   28158 main.go:141] libmachine: (ha-227346-m03)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/boot2docker.iso'/>
	I0819 17:11:04.873816   28158 main.go:141] libmachine: (ha-227346-m03)       <target dev='hdc' bus='scsi'/>
	I0819 17:11:04.873826   28158 main.go:141] libmachine: (ha-227346-m03)       <readonly/>
	I0819 17:11:04.873834   28158 main.go:141] libmachine: (ha-227346-m03)     </disk>
	I0819 17:11:04.873845   28158 main.go:141] libmachine: (ha-227346-m03)     <disk type='file' device='disk'>
	I0819 17:11:04.873861   28158 main.go:141] libmachine: (ha-227346-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:11:04.873908   28158 main.go:141] libmachine: (ha-227346-m03)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/ha-227346-m03.rawdisk'/>
	I0819 17:11:04.873919   28158 main.go:141] libmachine: (ha-227346-m03)       <target dev='hda' bus='virtio'/>
	I0819 17:11:04.873925   28158 main.go:141] libmachine: (ha-227346-m03)     </disk>
	I0819 17:11:04.873946   28158 main.go:141] libmachine: (ha-227346-m03)     <interface type='network'>
	I0819 17:11:04.873962   28158 main.go:141] libmachine: (ha-227346-m03)       <source network='mk-ha-227346'/>
	I0819 17:11:04.873970   28158 main.go:141] libmachine: (ha-227346-m03)       <model type='virtio'/>
	I0819 17:11:04.873976   28158 main.go:141] libmachine: (ha-227346-m03)     </interface>
	I0819 17:11:04.873985   28158 main.go:141] libmachine: (ha-227346-m03)     <interface type='network'>
	I0819 17:11:04.873995   28158 main.go:141] libmachine: (ha-227346-m03)       <source network='default'/>
	I0819 17:11:04.874008   28158 main.go:141] libmachine: (ha-227346-m03)       <model type='virtio'/>
	I0819 17:11:04.874021   28158 main.go:141] libmachine: (ha-227346-m03)     </interface>
	I0819 17:11:04.874057   28158 main.go:141] libmachine: (ha-227346-m03)     <serial type='pty'>
	I0819 17:11:04.874089   28158 main.go:141] libmachine: (ha-227346-m03)       <target port='0'/>
	I0819 17:11:04.874105   28158 main.go:141] libmachine: (ha-227346-m03)     </serial>
	I0819 17:11:04.874116   28158 main.go:141] libmachine: (ha-227346-m03)     <console type='pty'>
	I0819 17:11:04.874129   28158 main.go:141] libmachine: (ha-227346-m03)       <target type='serial' port='0'/>
	I0819 17:11:04.874139   28158 main.go:141] libmachine: (ha-227346-m03)     </console>
	I0819 17:11:04.874150   28158 main.go:141] libmachine: (ha-227346-m03)     <rng model='virtio'>
	I0819 17:11:04.874166   28158 main.go:141] libmachine: (ha-227346-m03)       <backend model='random'>/dev/random</backend>
	I0819 17:11:04.874185   28158 main.go:141] libmachine: (ha-227346-m03)     </rng>
	I0819 17:11:04.874203   28158 main.go:141] libmachine: (ha-227346-m03)     
	I0819 17:11:04.874219   28158 main.go:141] libmachine: (ha-227346-m03)     
	I0819 17:11:04.874229   28158 main.go:141] libmachine: (ha-227346-m03)   </devices>
	I0819 17:11:04.874242   28158 main.go:141] libmachine: (ha-227346-m03) </domain>
	I0819 17:11:04.874250   28158 main.go:141] libmachine: (ha-227346-m03) 
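The block above is the libvirt domain XML the kvm2 driver defines for the new VM. Defining and starting a domain from such XML can be sketched with the libvirt Go bindings; the module path (libvirt.org/go/libvirt), the XML file name, and the qemu:///system URI are assumptions for illustration, not the driver's actual code.

    // Sketch: define and boot a libvirt domain from an XML definition like the one above.
    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system") // assumed connection URI
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	xml, err := os.ReadFile("ha-227346-m03.xml") // placeholder file holding the <domain> XML
    	if err != nil {
    		panic(err)
    	}
    	dom, err := conn.DomainDefineXML(string(xml)) // register the domain with libvirt
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // start the defined domain
    		panic(err)
    	}
    	fmt.Println("domain started")
    }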
	I0819 17:11:04.880861   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:55:cd:c0 in network default
	I0819 17:11:04.881422   28158 main.go:141] libmachine: (ha-227346-m03) Ensuring networks are active...
	I0819 17:11:04.881441   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:04.882176   28158 main.go:141] libmachine: (ha-227346-m03) Ensuring network default is active
	I0819 17:11:04.882447   28158 main.go:141] libmachine: (ha-227346-m03) Ensuring network mk-ha-227346 is active
	I0819 17:11:04.882807   28158 main.go:141] libmachine: (ha-227346-m03) Getting domain xml...
	I0819 17:11:04.883659   28158 main.go:141] libmachine: (ha-227346-m03) Creating domain...
	I0819 17:11:06.122917   28158 main.go:141] libmachine: (ha-227346-m03) Waiting to get IP...
	I0819 17:11:06.123667   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:06.124078   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:06.124129   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:06.124069   28924 retry.go:31] will retry after 273.06976ms: waiting for machine to come up
	I0819 17:11:06.398662   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:06.399173   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:06.399204   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:06.399134   28924 retry.go:31] will retry after 366.928672ms: waiting for machine to come up
	I0819 17:11:06.767695   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:06.768082   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:06.768114   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:06.768030   28924 retry.go:31] will retry after 471.347113ms: waiting for machine to come up
	I0819 17:11:07.240569   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:07.241136   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:07.241163   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:07.241101   28924 retry.go:31] will retry after 537.842776ms: waiting for machine to come up
	I0819 17:11:07.780975   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:07.781443   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:07.781498   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:07.781419   28924 retry.go:31] will retry after 459.754858ms: waiting for machine to come up
	I0819 17:11:08.243095   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:08.243527   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:08.243550   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:08.243481   28924 retry.go:31] will retry after 601.291451ms: waiting for machine to come up
	I0819 17:11:08.846140   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:08.846555   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:08.846581   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:08.846507   28924 retry.go:31] will retry after 924.867302ms: waiting for machine to come up
	I0819 17:11:09.772643   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:09.773162   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:09.773198   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:09.773119   28924 retry.go:31] will retry after 1.203805195s: waiting for machine to come up
	I0819 17:11:10.978982   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:10.979464   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:10.979486   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:10.979427   28924 retry.go:31] will retry after 1.337086668s: waiting for machine to come up
	I0819 17:11:12.317717   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:12.318172   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:12.318199   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:12.318133   28924 retry.go:31] will retry after 1.894350017s: waiting for machine to come up
	I0819 17:11:14.214577   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:14.215034   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:14.215108   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:14.215008   28924 retry.go:31] will retry after 2.066719812s: waiting for machine to come up
	I0819 17:11:16.283726   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:16.284144   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:16.284165   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:16.284107   28924 retry.go:31] will retry after 3.274271926s: waiting for machine to come up
	I0819 17:11:19.559337   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:19.559703   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:19.559726   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:19.559661   28924 retry.go:31] will retry after 4.33036353s: waiting for machine to come up
	I0819 17:11:23.894798   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:23.895283   28158 main.go:141] libmachine: (ha-227346-m03) Found IP for machine: 192.168.39.95
	I0819 17:11:23.895309   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
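The retry lines above are the kvm2 driver polling libvirt for the guest's DHCP lease, sleeping a little longer after each miss until the lease for 52:54:00:9c:a7:7a appears. A minimal Go sketch of that grow-the-backoff polling loop, with a hypothetical lookup callback standing in for the libvirt lease query (the exact delays and jitter are assumptions, not minikube's actual constants):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps asking lookup for the guest's address, sleeping a little
// longer after every miss, mirroring the retry.go pattern in the log above
// (471ms, 537ms, ... 4.3s) while the domain waits for a DHCP lease.
func waitForIP(lookup func() (string, bool), deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 400 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		sleep := wait/2 + time.Duration(rand.Int63n(int64(wait))) // add some jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait += wait / 2 // back off gradually between attempts
		}
	}
	return "", errors.New("timed out waiting for the machine to get an IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		if attempts < 5 { // pretend the DHCP lease shows up on the 5th poll
			return "", false
		}
		return "192.168.39.95", true
	}, 2*time.Minute)
	fmt.Println(ip, err)
}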
	I0819 17:11:23.895320   28158 main.go:141] libmachine: (ha-227346-m03) Reserving static IP address...
	I0819 17:11:23.895646   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find host DHCP lease matching {name: "ha-227346-m03", mac: "52:54:00:9c:a7:7a", ip: "192.168.39.95"} in network mk-ha-227346
	I0819 17:11:23.970223   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Getting to WaitForSSH function...
	I0819 17:11:23.970255   28158 main.go:141] libmachine: (ha-227346-m03) Reserved static IP address: 192.168.39.95
	I0819 17:11:23.970269   28158 main.go:141] libmachine: (ha-227346-m03) Waiting for SSH to be available...
	I0819 17:11:23.972464   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:23.972812   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346
	I0819 17:11:23.972838   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find defined IP address of network mk-ha-227346 interface with MAC address 52:54:00:9c:a7:7a
	I0819 17:11:23.972973   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH client type: external
	I0819 17:11:23.972999   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa (-rw-------)
	I0819 17:11:23.973028   28158 main.go:141] libmachine: (ha-227346-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:11:23.973042   28158 main.go:141] libmachine: (ha-227346-m03) DBG | About to run SSH command:
	I0819 17:11:23.973061   28158 main.go:141] libmachine: (ha-227346-m03) DBG | exit 0
	I0819 17:11:23.976368   28158 main.go:141] libmachine: (ha-227346-m03) DBG | SSH cmd err, output: exit status 255: 
	I0819 17:11:23.976395   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 17:11:23.976402   28158 main.go:141] libmachine: (ha-227346-m03) DBG | command : exit 0
	I0819 17:11:23.976407   28158 main.go:141] libmachine: (ha-227346-m03) DBG | err     : exit status 255
	I0819 17:11:23.976415   28158 main.go:141] libmachine: (ha-227346-m03) DBG | output  : 
	I0819 17:11:26.978531   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Getting to WaitForSSH function...
	I0819 17:11:26.981090   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:26.981502   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:26.981530   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:26.981647   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH client type: external
	I0819 17:11:26.981676   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa (-rw-------)
	I0819 17:11:26.981719   28158 main.go:141] libmachine: (ha-227346-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:11:26.981742   28158 main.go:141] libmachine: (ha-227346-m03) DBG | About to run SSH command:
	I0819 17:11:26.981773   28158 main.go:141] libmachine: (ha-227346-m03) DBG | exit 0
	I0819 17:11:27.104490   28158 main.go:141] libmachine: (ha-227346-m03) DBG | SSH cmd err, output: <nil>: 
	I0819 17:11:27.104709   28158 main.go:141] libmachine: (ha-227346-m03) KVM machine creation complete!
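WaitForSSH above simply runs `exit 0` through an external ssh client until it returns status 0; the first attempt fails with 255 because sshd inside the freshly booted VM is not answering yet. A rough, simplified equivalent using os/exec (the key path and retry interval here are illustrative assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... exit 0` against the guest until it exits 0,
// i.e. until sshd is reachable and executes the no-op command.
func waitForSSH(addr, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up and ran the command
		}
		time.Sleep(3 * time.Second) // the log waits roughly 3s between tries
	}
	return fmt.Errorf("ssh to %s never became available", addr)
}

func main() {
	err := waitForSSH("192.168.39.95", "/home/jenkins/.ssh/id_rsa", 20)
	fmt.Println(err)
}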
	I0819 17:11:27.105021   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetConfigRaw
	I0819 17:11:27.105548   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:27.105770   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:27.105906   28158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:11:27.105917   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:11:27.107064   28158 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:11:27.107078   28158 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:11:27.107083   28158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:11:27.107090   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.109178   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.109537   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.109559   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.109754   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.109922   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.110064   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.110202   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.110340   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.110527   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.110537   28158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:11:27.211831   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:11:27.211858   28158 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:11:27.211869   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.214484   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.214860   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.214883   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.215082   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.215270   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.215403   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.215517   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.215658   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.215852   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.215866   28158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:11:27.316843   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:11:27.316915   28158 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:11:27.316926   28158 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:11:27.316937   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:27.317178   28158 buildroot.go:166] provisioning hostname "ha-227346-m03"
	I0819 17:11:27.317202   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:27.317362   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.319777   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.320082   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.320104   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.320215   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.320404   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.320573   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.320692   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.320840   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.321003   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.321015   28158 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346-m03 && echo "ha-227346-m03" | sudo tee /etc/hostname
	I0819 17:11:27.440791   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346-m03
	
	I0819 17:11:27.440819   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.443593   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.443926   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.443953   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.444162   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.444382   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.444543   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.444686   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.444854   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.445019   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.445048   28158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:11:27.557081   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
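The hostname is provisioned with the small shell script shown just above, pushed over SSH: set the hostname, write /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it. A sketch of how that script can be assembled in Go before handing it to the SSH runner (the helper name is illustrative, not minikube's actual API):

package main

import (
	"fmt"
)

// hostnameScript builds the same shell the log runs over SSH: set the
// hostname, persist it, and ensure /etc/hosts resolves it via 127.0.1.1.
func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("ha-227346-m03"))
}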
	I0819 17:11:27.557106   28158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:11:27.557123   28158 buildroot.go:174] setting up certificates
	I0819 17:11:27.557131   28158 provision.go:84] configureAuth start
	I0819 17:11:27.557139   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:27.557392   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:27.559867   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.560234   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.560258   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.560475   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.562756   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.563102   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.563123   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.563271   28158 provision.go:143] copyHostCerts
	I0819 17:11:27.563305   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:11:27.563344   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:11:27.563355   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:11:27.563440   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:11:27.563586   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:11:27.563616   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:11:27.563626   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:11:27.563669   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:11:27.563741   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:11:27.563758   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:11:27.563764   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:11:27.563787   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:11:27.563848   28158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346-m03 san=[127.0.0.1 192.168.39.95 ha-227346-m03 localhost minikube]
	I0819 17:11:27.713684   28158 provision.go:177] copyRemoteCerts
	I0819 17:11:27.713736   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:11:27.713778   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.716487   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.716844   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.716891   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.717077   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.717267   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.717458   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.717577   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:27.798375   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:11:27.798443   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:11:27.820717   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:11:27.820818   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:11:27.843998   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:11:27.844066   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:11:27.867190   28158 provision.go:87] duration metric: took 310.049173ms to configureAuth
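configureAuth above generates a server certificate whose SAN list covers 127.0.0.1, the node IP, the hostname, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A simplified Go sketch of building a certificate with that kind of SAN list; it is self-signed here for brevity, whereas minikube signs it with its CA key pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Template with the same kind of SAN list the log shows: node IPs plus
	// the node hostname, localhost and minikube.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-227346-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
		DNSNames:     []string{"ha-227346-m03", "localhost", "minikube"},
	}

	// Self-signed for brevity; the real flow uses the ca.pem/ca-key.pem pair
	// as the signing parent instead of the server key itself.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}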
	I0819 17:11:27.867217   28158 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:11:27.867595   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:27.867692   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.870487   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.870891   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.870916   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.871163   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.871338   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.871512   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.871665   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.871846   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.872026   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.872042   28158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:11:28.136267   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:11:28.136303   28158 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:11:28.136314   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetURL
	I0819 17:11:28.137715   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using libvirt version 6000000
	I0819 17:11:28.139969   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.140395   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.140426   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.140684   28158 main.go:141] libmachine: Docker is up and running!
	I0819 17:11:28.140699   28158 main.go:141] libmachine: Reticulating splines...
	I0819 17:11:28.140708   28158 client.go:171] duration metric: took 23.611840185s to LocalClient.Create
	I0819 17:11:28.140739   28158 start.go:167] duration metric: took 23.611901411s to libmachine.API.Create "ha-227346"
	I0819 17:11:28.140765   28158 start.go:293] postStartSetup for "ha-227346-m03" (driver="kvm2")
	I0819 17:11:28.140779   28158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:11:28.140814   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.141056   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:11:28.141077   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:28.143448   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.143814   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.143842   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.143991   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.144186   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.144348   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.144488   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:28.226560   28158 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:11:28.230665   28158 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:11:28.230692   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:11:28.230774   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:11:28.230867   28158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:11:28.230878   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:11:28.230983   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:11:28.239824   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:11:28.262487   28158 start.go:296] duration metric: took 121.71003ms for postStartSetup
	I0819 17:11:28.262530   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetConfigRaw
	I0819 17:11:28.263093   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:28.265528   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.265920   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.265949   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.266175   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:11:28.266359   28158 start.go:128] duration metric: took 23.755685114s to createHost
	I0819 17:11:28.266382   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:28.268689   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.269052   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.269073   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.269206   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.269387   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.269516   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.269625   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.269738   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:28.269892   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:28.269902   28158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:11:28.373217   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087488.351450125
	
	I0819 17:11:28.373241   28158 fix.go:216] guest clock: 1724087488.351450125
	I0819 17:11:28.373252   28158 fix.go:229] Guest: 2024-08-19 17:11:28.351450125 +0000 UTC Remote: 2024-08-19 17:11:28.266370008 +0000 UTC m=+144.263103862 (delta=85.080117ms)
	I0819 17:11:28.373270   28158 fix.go:200] guest clock delta is within tolerance: 85.080117ms
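The fix.go lines compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the machine as long as the drift stays small. A tiny sketch of that comparison reusing the two timestamps from the log; the 2s tolerance is an assumption, not necessarily minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK is the shape of the guest-clock check above: absolute
// difference between guest and host time, accepted if within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1724087488, 351450125) // 2024-08-19 17:11:28.351450125 UTC
	host := time.Date(2024, 8, 19, 17, 11, 28, 266370008, time.UTC)
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok) // ~85ms, as in the log
}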
	I0819 17:11:28.373276   28158 start.go:83] releasing machines lock for "ha-227346-m03", held for 23.862712507s
	I0819 17:11:28.373302   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.373639   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:28.376587   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.377067   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.377097   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.379453   28158 out.go:177] * Found network options:
	I0819 17:11:28.380910   28158 out.go:177]   - NO_PROXY=192.168.39.205,192.168.39.189
	W0819 17:11:28.382103   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 17:11:28.382127   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:11:28.382144   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.382732   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.382933   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.383029   28158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:11:28.383063   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	W0819 17:11:28.383088   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 17:11:28.383126   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:11:28.383190   28158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:11:28.383209   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:28.385767   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386024   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386133   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.386157   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386257   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.386375   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.386398   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386428   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.386557   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.386619   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.386748   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.386778   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:28.386845   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.386988   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:28.613799   28158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:11:28.620087   28158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:11:28.620174   28158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:11:28.635690   28158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
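Before the runtime is configured, any bridge or podman CNI config under /etc/cni/net.d is renamed with a .mk_disabled suffix so only minikube's own CNI stays active. Roughly the same effect as that find/mv one-liner, sketched in Go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs so the
// container runtime ignores them, returning the paths it disabled.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}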
	I0819 17:11:28.635713   28158 start.go:495] detecting cgroup driver to use...
	I0819 17:11:28.635767   28158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:11:28.653193   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:11:28.666341   28158 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:11:28.666408   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:11:28.681324   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:11:28.695793   28158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:11:28.821347   28158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:11:28.981851   28158 docker.go:233] disabling docker service ...
	I0819 17:11:28.981909   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:11:28.996004   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:11:29.009194   28158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:11:29.135441   28158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:11:29.252378   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:11:29.266336   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:11:29.285515   28158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:11:29.285572   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.295076   28158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:11:29.295136   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.305191   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.315169   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.324809   28158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:11:29.334804   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.344413   28158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.359937   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.371146   28158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:11:29.381156   28158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:11:29.381214   28158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:11:29.396311   28158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:11:29.407612   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:11:29.525713   28158 ssh_runner.go:195] Run: sudo systemctl restart crio
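The sed calls above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, set conmon_cgroup, and open low unprivileged ports before restarting the service. A reduced sketch of the first two edits done as in-place regexp replacements; it only illustrates the pattern and skips the conmon_cgroup and default_sysctls handling:

package main

import (
	"os"
	"regexp"
)

// patchCrioConf performs the same kind of in-place edits the log applies
// with sed: pin the pause image and force the cgroupfs cgroup manager.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}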
	I0819 17:11:29.666802   28158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:11:29.666870   28158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:11:29.671238   28158 start.go:563] Will wait 60s for crictl version
	I0819 17:11:29.671284   28158 ssh_runner.go:195] Run: which crictl
	I0819 17:11:29.674762   28158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:11:29.714027   28158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:11:29.714110   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:11:29.741537   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:11:29.770866   28158 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:11:29.772404   28158 out.go:177]   - env NO_PROXY=192.168.39.205
	I0819 17:11:29.773657   28158 out.go:177]   - env NO_PROXY=192.168.39.205,192.168.39.189
	I0819 17:11:29.774921   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:29.777679   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:29.778100   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:29.778125   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:29.778344   28158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:11:29.782120   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:11:29.793730   28158 mustload.go:65] Loading cluster: ha-227346
	I0819 17:11:29.793942   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:29.794193   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:29.794238   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:29.810061   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0819 17:11:29.810397   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:29.810856   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:29.810877   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:29.811174   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:29.811356   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:11:29.812979   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:11:29.813359   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:29.813397   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:29.827628   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0819 17:11:29.827979   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:29.828451   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:29.828479   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:29.828782   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:29.828973   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:11:29.829149   28158 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.95
	I0819 17:11:29.829160   28158 certs.go:194] generating shared ca certs ...
	I0819 17:11:29.829173   28158 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:11:29.829296   28158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:11:29.829363   28158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:11:29.829385   28158 certs.go:256] generating profile certs ...
	I0819 17:11:29.829470   28158 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:11:29.829498   28158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0
	I0819 17:11:29.829513   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.189 192.168.39.95 192.168.39.254]
	I0819 17:11:29.904964   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0 ...
	I0819 17:11:29.904995   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0: {Name:mkd267ee1d478f75426afaa32d391f83a54bf88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:11:29.905167   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0 ...
	I0819 17:11:29.905184   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0: {Name:mkcaafd208354760e3cb5f5e92c19ee041550ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:11:29.905274   28158 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:11:29.905427   28158 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
	I0819 17:11:29.905578   28158 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:11:29.905594   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:11:29.905612   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:11:29.905629   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:11:29.905648   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:11:29.905666   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:11:29.905683   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:11:29.905701   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:11:29.905719   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:11:29.905790   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:11:29.905831   28158 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:11:29.905844   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:11:29.905881   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:11:29.905913   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:11:29.905944   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:11:29.905997   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:11:29.906033   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:11:29.906054   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:29.906073   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:11:29.906129   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:11:29.908886   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:29.909333   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:11:29.909356   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:29.909498   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:11:29.909681   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:11:29.909831   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:11:29.909951   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:11:29.985058   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 17:11:29.989402   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 17:11:29.999229   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 17:11:30.003430   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 17:11:30.014843   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 17:11:30.018683   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 17:11:30.029939   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 17:11:30.033838   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 17:11:30.044547   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 17:11:30.049425   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 17:11:30.059299   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 17:11:30.063249   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 17:11:30.074985   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:11:30.098380   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:11:30.120042   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:11:30.141832   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:11:30.163415   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 17:11:30.185609   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:11:30.206567   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:11:30.227691   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:11:30.248662   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:11:30.270287   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:11:30.292948   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:11:30.314605   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 17:11:30.330217   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 17:11:30.346151   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 17:11:30.361380   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 17:11:30.375877   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 17:11:30.391039   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 17:11:30.406523   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 17:11:30.422898   28158 ssh_runner.go:195] Run: openssl version
	I0819 17:11:30.428071   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:11:30.438558   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:11:30.443023   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:11:30.443069   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:11:30.449050   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:11:30.459207   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:11:30.469213   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:11:30.472943   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:11:30.472983   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:11:30.478039   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:11:30.488052   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:11:30.498253   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:30.502142   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:30.502189   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:30.507250   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
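
The openssl/ln steps above install each CA bundle under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, and so on); the .0 suffix only increments if two CAs share the same hash. A minimal Go sketch of that pattern, shelling out to openssl the same way the log does (the function name and hard-coded paths are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors the "openssl x509 -hash" + "ln -fs" steps in the log:
// it computes the OpenSSL subject hash of a PEM certificate and creates the
// <hash>.0 symlink that the TLS stack uses to look up trusted CAs.
func linkByHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // force-replace, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative path taken from the log above; needs root to write /etc/ssl/certs.
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
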
	I0819 17:11:30.517477   28158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:11:30.521425   28158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:11:30.521479   28158 kubeadm.go:934] updating node {m03 192.168.39.95 8443 v1.31.0 crio true true} ...
	I0819 17:11:30.521567   28158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:11:30.521605   28158 kube-vip.go:115] generating kube-vip config ...
	I0819 17:11:30.521646   28158 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:11:30.537160   28158 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:11:30.537227   28158 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
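
The manifest above is written as a static pod on every control-plane node: kube-vip holds the 192.168.39.254 virtual IP via leader election (lease plndr-cp-lock) and, with lb_enable set, balances API-server traffic on port 8443. A quick reachability probe for the VIP, sketched in Go; skipping TLS verification and the /readyz path are illustrative shortcuts, not part of the test itself:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// VIP and port come from the kube-vip config above. Skipping certificate
	// verification is only acceptable for a quick reachability probe.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/readyz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP answer (200, or 403 if anonymous auth is disabled) means some
	// control-plane node currently holds the VIP and is serving port 8443.
	fmt.Println("VIP answered:", resp.Status)
}
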
	I0819 17:11:30.537286   28158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:11:30.546975   28158 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 17:11:30.547044   28158 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 17:11:30.556377   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 17:11:30.556407   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:11:30.556433   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 17:11:30.556452   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:11:30.556471   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:11:30.556383   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 17:11:30.556537   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:11:30.556566   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:11:30.569921   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 17:11:30.569960   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 17:11:30.569967   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 17:11:30.569991   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 17:11:30.570000   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:11:30.570077   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:11:30.599564   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 17:11:30.599624   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
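
Each "Not caching binary" line above points at a dl.k8s.io URL plus its published .sha256 file, and the scp lines push the verified binaries onto the node. A minimal sketch of that download-and-verify step, assuming the .sha256 file holds a hex digest (optionally followed by a filename); this shows the general pattern, not minikube's actual downloader:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// The published .sha256 file carries the expected digest as a hex string.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(string(raw))
	if len(fields) == 0 || fields[0] != got {
		panic("checksum mismatch; refusing to install the binary")
	}
	fmt.Println("kubelet verified, sha256:", got)
}
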
	I0819 17:11:31.367847   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 17:11:31.377149   28158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 17:11:31.392618   28158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:11:31.408233   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 17:11:31.423050   28158 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:11:31.426519   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
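
The one-liner above keeps the hosts entry idempotent: strip any existing control-plane.minikube.internal line, then re-append the VIP mapping. The same idea in Go, simplified to a direct rewrite of the file rather than the tmp-file-plus-sudo-cp dance in the log:

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	// Keep every line except a stale control-plane.minikube.internal mapping,
	// then append the current one (same effect as the grep -v / echo pipeline).
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
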
	I0819 17:11:31.437881   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:11:31.560914   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:11:31.579361   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:11:31.579688   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:31.579736   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:31.595797   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0819 17:11:31.596267   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:31.596829   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:31.596856   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:31.597154   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:31.597337   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:11:31.597464   28158 start.go:317] joinCluster: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:11:31.597610   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 17:11:31.597625   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:11:31.600419   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:31.600882   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:11:31.600911   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:31.601001   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:11:31.601158   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:11:31.601309   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:11:31.601472   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:11:31.745973   28158 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:11:31.746014   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gklo5r.t543lv6u7mp614yz --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I0819 17:11:54.732468   28158 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gklo5r.t543lv6u7mp614yz --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (22.98643143s)
	I0819 17:11:54.732501   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 17:11:55.231779   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-227346-m03 minikube.k8s.io/updated_at=2024_08_19T17_11_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-227346 minikube.k8s.io/primary=false
	I0819 17:11:55.346954   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-227346-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 17:11:55.468818   28158 start.go:319] duration metric: took 23.871350348s to joinCluster
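
The join sequence recorded above is: ask the existing control plane for a join command (kubeadm token create --print-join-command --ttl=0), extend it with the control-plane flags visible in the log, run it on m03, then label the new node and remove the control-plane NoSchedule taint. A simplified sketch of assembling and running that command with os/exec; it runs locally instead of over SSH as minikube's ssh_runner does, and reuses the flag values from the log purely for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: ask the existing control plane for a fresh join command.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2: extend it with the control-plane flags seen in the log.
	full := joinCmd + strings.Join([]string{
		"",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-227346-m03",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.95",
		"--apiserver-bind-port=8443",
	}, " ")

	// Step 3: run it through a shell, roughly as the test harness does over SSH.
	fmt.Println("running:", full)
	if err := exec.Command("/bin/bash", "-c", "sudo "+full).Run(); err != nil {
		panic(err)
	}
}
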
	I0819 17:11:55.468890   28158 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:11:55.469173   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:55.470511   28158 out.go:177] * Verifying Kubernetes components...
	I0819 17:11:55.471891   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:11:55.689068   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:11:55.717022   28158 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:11:55.717287   28158 kapi.go:59] client config for ha-227346: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt", KeyFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key", CAFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 17:11:55.717345   28158 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.205:8443
	I0819 17:11:55.717563   28158 node_ready.go:35] waiting up to 6m0s for node "ha-227346-m03" to be "Ready" ...
	I0819 17:11:55.717647   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:55.717657   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:55.717668   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:55.717677   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:55.721697   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:11:56.218102   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:56.218124   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:56.218133   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:56.218137   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:56.221759   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:56.717988   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:56.718011   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:56.718021   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:56.718026   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:56.721656   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:57.217743   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:57.217764   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:57.217775   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:57.217784   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:57.221371   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:57.718297   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:57.718322   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:57.718330   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:57.718333   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:57.722175   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:57.722740   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:11:58.217968   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:58.217990   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:58.217998   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:58.218002   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:58.221606   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:58.718628   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:58.718651   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:58.718659   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:58.718663   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:58.722052   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:59.217809   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:59.217830   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:59.217842   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:59.217848   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:59.220798   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:59.718523   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:59.718545   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:59.718553   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:59.718558   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:59.721957   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:00.217829   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:00.217849   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:00.217860   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:00.217864   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:00.221107   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:00.221738   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:00.718070   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:00.718092   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:00.718100   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:00.718105   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:00.720812   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:01.218328   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:01.218359   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:01.218372   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:01.218378   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:01.221632   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:01.717989   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:01.718015   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:01.718026   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:01.718032   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:01.721601   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:02.218637   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:02.218662   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:02.218672   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:02.218677   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:02.222088   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:02.222659   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:02.718521   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:02.718547   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:02.718559   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:02.718565   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:02.722975   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:03.217753   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:03.217786   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:03.217797   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:03.217803   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:03.220984   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:03.718016   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:03.718039   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:03.718052   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:03.718058   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:03.721140   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:04.218203   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:04.218227   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:04.218235   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:04.218240   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:04.222558   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:04.223332   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:04.718161   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:04.718186   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:04.718196   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:04.718200   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:04.722420   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:05.218653   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:05.218673   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:05.218681   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:05.218686   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:05.221471   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:05.718390   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:05.718414   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:05.718424   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:05.718427   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:05.722029   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:06.218649   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:06.218668   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:06.218676   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:06.218681   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:06.222025   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:06.718161   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:06.718187   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:06.718196   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:06.718202   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:06.722090   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:06.722871   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:07.217794   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:07.217816   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:07.217824   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:07.217828   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:07.223241   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:07.718053   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:07.718076   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:07.718086   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:07.718092   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:07.721899   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:08.217854   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:08.217879   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:08.217890   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:08.217896   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:08.221384   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:08.718252   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:08.718285   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:08.718296   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:08.718302   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:08.721661   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:09.218524   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:09.218546   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:09.218554   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:09.218558   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:09.222605   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:09.223217   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:09.718138   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:09.718160   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:09.718169   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:09.718172   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:09.721759   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:10.218645   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:10.218670   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:10.218680   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:10.218685   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:10.222351   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:10.718475   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:10.718502   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:10.718512   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:10.718517   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:10.722308   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:11.218730   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:11.218751   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:11.218759   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:11.218763   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:11.222028   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:11.717962   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:11.717985   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:11.717993   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:11.717998   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:11.721365   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:11.721979   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:12.218499   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:12.218528   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:12.218540   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:12.218545   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:12.221993   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:12.717772   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:12.717794   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:12.717802   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:12.717806   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:12.721184   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:13.217722   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:13.217764   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.217772   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.217775   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.221515   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:13.718649   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:13.718677   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.718685   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.718690   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.722013   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:13.722694   28158 node_ready.go:49] node "ha-227346-m03" has status "Ready":"True"
	I0819 17:12:13.722721   28158 node_ready.go:38] duration metric: took 18.005141057s for node "ha-227346-m03" to be "Ready" ...
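
The repeated GET /api/v1/nodes/ha-227346-m03 calls above are the test polling roughly every 500 ms until the node reports the Ready condition, which here took about 18 s. The equivalent wait expressed with client-go, as a minimal sketch (the kubeconfig path is the one the log loads; this is not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19478-10654/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll the node until its Ready condition turns True, like the loop in the log.
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-227346-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for the node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
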
	I0819 17:12:13.722743   28158 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:12:13.722805   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:13.722813   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.722821   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.722825   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.741417   28158 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0819 17:12:13.749956   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.750055   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-9s77g
	I0819 17:12:13.750066   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.750077   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.750089   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.759295   28158 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 17:12:13.759942   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:13.759959   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.759968   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.759974   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.764905   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:13.765725   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.765743   28158 pod_ready.go:82] duration metric: took 15.756145ms for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.765756   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.765816   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-r68td
	I0819 17:12:13.765826   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.765836   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.765843   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.775682   28158 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 17:12:13.776257   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:13.776271   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.776281   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.776288   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.784462   28158 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 17:12:13.785030   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.785050   28158 pod_ready.go:82] duration metric: took 19.286464ms for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.785066   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.785127   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346
	I0819 17:12:13.785136   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.785145   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.785151   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.789445   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:13.790074   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:13.790088   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.790098   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.790104   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.794738   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:13.795297   28158 pod_ready.go:93] pod "etcd-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.795319   28158 pod_ready.go:82] duration metric: took 10.2455ms for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.795331   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.795393   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m02
	I0819 17:12:13.795403   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.795417   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.795424   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.797736   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:13.798295   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:13.798312   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.798319   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.798322   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.800436   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:13.800932   28158 pod_ready.go:93] pod "etcd-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.800949   28158 pod_ready.go:82] duration metric: took 5.610847ms for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.800957   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.919385   28158 request.go:632] Waited for 118.367661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m03
	I0819 17:12:13.919475   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m03
	I0819 17:12:13.919487   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.919497   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.919507   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.924018   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:14.119138   28158 request.go:632] Waited for 194.245348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:14.119192   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:14.119198   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.119208   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.119213   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.122664   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.123396   28158 pod_ready.go:93] pod "etcd-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:14.123412   28158 pod_ready.go:82] duration metric: took 322.449239ms for pod "etcd-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
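
The "Waited ... due to client-side throttling" messages come from client-go's default rate limiter: with QPS and Burst left at zero on rest.Config (as in the client config dumped earlier), the client falls back to 5 requests per second with a burst of 10, so the back-to-back pod and node lookups get briefly delayed. They are informational, not failures. If the delays mattered, the limits could be raised before building the clientset; a small sketch with illustrative values:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19478-10654/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS and Burst left at zero, client-go falls back to 5 QPS with a
	// burst of 10, which is what produces the throttling messages above.
	cfg.QPS = 50    // illustrative value
	cfg.Burst = 100 // illustrative value
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("clientset built with relaxed client-side rate limits")
}
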
	I0819 17:12:14.123434   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.318754   28158 request.go:632] Waited for 195.248967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:12:14.318844   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:12:14.318855   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.318867   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.318875   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.322565   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.519571   28158 request.go:632] Waited for 196.355039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:14.519632   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:14.519637   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.519644   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.519647   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.522797   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.523450   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:14.523467   28158 pod_ready.go:82] duration metric: took 400.022092ms for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.523476   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.718826   28158 request.go:632] Waited for 195.289288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:12:14.718894   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:12:14.718899   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.718907   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.718912   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.722295   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.919063   28158 request.go:632] Waited for 195.752698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:14.919127   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:14.919134   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.919146   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.919152   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.923184   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:14.923742   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:14.923759   28158 pod_ready.go:82] duration metric: took 400.275603ms for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.923770   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.118989   28158 request.go:632] Waited for 195.152436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m03
	I0819 17:12:15.119062   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m03
	I0819 17:12:15.119069   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.119082   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.119090   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.122088   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:15.319225   28158 request.go:632] Waited for 196.358865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:15.319292   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:15.319302   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.319313   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.319320   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.322339   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:15.323095   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:15.323112   28158 pod_ready.go:82] duration metric: took 399.335876ms for pod "kube-apiserver-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.323122   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.519334   28158 request.go:632] Waited for 196.150379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:12:15.519392   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:12:15.519397   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.519405   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.519409   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.522566   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:15.718684   28158 request.go:632] Waited for 195.3553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:15.718769   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:15.718775   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.718788   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.718793   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.722303   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:15.722793   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:15.722810   28158 pod_ready.go:82] duration metric: took 399.681477ms for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.722822   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.918907   28158 request.go:632] Waited for 196.018435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:12:15.918992   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:12:15.919015   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.919023   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.919034   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.925867   28158 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 17:12:16.118758   28158 request.go:632] Waited for 192.273548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.118822   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.118829   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.118849   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.118873   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.122242   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.122835   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:16.122854   28158 pod_ready.go:82] duration metric: took 400.025629ms for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.122865   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.319266   28158 request.go:632] Waited for 196.342359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m03
	I0819 17:12:16.319325   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m03
	I0819 17:12:16.319331   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.319341   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.319346   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.322738   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.519500   28158 request.go:632] Waited for 195.729905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:16.519566   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:16.519575   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.519585   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.519595   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.523553   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.524208   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:16.524228   28158 pod_ready.go:82] duration metric: took 401.354941ms for pod "kube-controller-manager-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.524238   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.719710   28158 request.go:632] Waited for 195.413497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:12:16.719763   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:12:16.719769   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.719776   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.719781   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.723404   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.918662   28158 request.go:632] Waited for 194.283424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.918753   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.918764   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.918774   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.918778   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.923165   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:16.923856   28158 pod_ready.go:93] pod "kube-proxy-6lhlp" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:16.923882   28158 pod_ready.go:82] duration metric: took 399.635573ms for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.923895   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.118925   28158 request.go:632] Waited for 194.967403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:12:17.118989   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:12:17.118997   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.119005   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.119010   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.122321   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.319330   28158 request.go:632] Waited for 196.262827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:17.319425   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:17.319437   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.319448   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.319457   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.323046   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.323651   28158 pod_ready.go:93] pod "kube-proxy-9xpm4" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:17.323670   28158 pod_ready.go:82] duration metric: took 399.767781ms for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.323679   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxvbj" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.519746   28158 request.go:632] Waited for 195.98484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxvbj
	I0819 17:12:17.519801   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxvbj
	I0819 17:12:17.519806   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.519814   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.519818   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.523219   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.719516   28158 request.go:632] Waited for 195.248597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:17.719582   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:17.719590   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.719597   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.719601   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.723301   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.723950   28158 pod_ready.go:93] pod "kube-proxy-sxvbj" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:17.723975   28158 pod_ready.go:82] duration metric: took 400.288816ms for pod "kube-proxy-sxvbj" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.723988   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.918820   28158 request.go:632] Waited for 194.75048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:12:17.918909   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:12:17.918926   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.918939   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.918946   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.924269   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:18.119515   28158 request.go:632] Waited for 194.352171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:18.119570   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:18.119575   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.119583   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.119598   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.122736   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.123500   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:18.123523   28158 pod_ready.go:82] duration metric: took 399.523466ms for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.123536   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.319503   28158 request.go:632] Waited for 195.888785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:12:18.319573   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:12:18.319581   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.319590   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.319596   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.322847   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.518991   28158 request.go:632] Waited for 195.347278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:18.519080   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:18.519093   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.519105   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.519113   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.522187   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.522787   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:18.522806   28158 pod_ready.go:82] duration metric: took 399.258763ms for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.522814   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.718903   28158 request.go:632] Waited for 196.006806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m03
	I0819 17:12:18.718958   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m03
	I0819 17:12:18.718964   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.718973   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.718977   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.722415   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.919588   28158 request.go:632] Waited for 196.387669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:18.919641   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:18.919648   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.919668   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.919688   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.923365   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.923942   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:18.923969   28158 pod_ready.go:82] duration metric: took 401.146883ms for pod "kube-scheduler-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.923984   28158 pod_ready.go:39] duration metric: took 5.201230703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
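
The readiness wait that ends above polls each system pod (and its node) with individual GETs, throttled client-side to roughly five requests per second, and declares a pod "Ready" once its PodReady condition is True. A minimal Go sketch of that condition check, assuming client-go, a kubeconfig pointing at this cluster, and an illustrative pod name; this is a simplified stand-in, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True,
// which is the condition the pod_ready.go lines above are waiting on.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: ~/.kube/config points at the ha-227346 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-227346", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
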
	I0819 17:12:18.924004   28158 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:12:18.924068   28158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:12:18.942030   28158 api_server.go:72] duration metric: took 23.473102266s to wait for apiserver process to appear ...
	I0819 17:12:18.942060   28158 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:12:18.942081   28158 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0819 17:12:18.946839   28158 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0819 17:12:18.946912   28158 round_trippers.go:463] GET https://192.168.39.205:8443/version
	I0819 17:12:18.946922   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.946937   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.946951   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.948267   28158 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 17:12:18.948441   28158 api_server.go:141] control plane version: v1.31.0
	I0819 17:12:18.948464   28158 api_server.go:131] duration metric: took 6.396635ms to wait for apiserver health ...
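
The healthz probe logged above is a plain HTTPS GET against https://192.168.39.205:8443/healthz that succeeds once the body is "ok". A minimal sketch of the same probe in Go; it skips TLS verification for brevity, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: certificate verification is skipped here; minikube's client uses the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.205:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // the log above shows status 200 and body "ok"
}
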
	I0819 17:12:18.948473   28158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:12:19.118902   28158 request.go:632] Waited for 170.356227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.118972   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.118977   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.118985   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.118990   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.124102   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:19.130518   28158 system_pods.go:59] 24 kube-system pods found
	I0819 17:12:19.130548   28158 system_pods.go:61] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:12:19.130555   28158 system_pods.go:61] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:12:19.130558   28158 system_pods.go:61] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:12:19.130561   28158 system_pods.go:61] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:12:19.130565   28158 system_pods.go:61] "etcd-ha-227346-m03" [fb82b188-0187-4e5c-8829-5f498230f2dd] Running
	I0819 17:12:19.130568   28158 system_pods.go:61] "kindnet-2xfpd" [8ddc9fb1-b06d-43bb-b73e-ea2d505a36ab] Running
	I0819 17:12:19.130571   28158 system_pods.go:61] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:12:19.130574   28158 system_pods.go:61] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:12:19.130583   28158 system_pods.go:61] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:12:19.130592   28158 system_pods.go:61] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:12:19.130597   28158 system_pods.go:61] "kube-apiserver-ha-227346-m03" [cbf722b2-fc26-47e0-9f1e-4032d618b101] Running
	I0819 17:12:19.130605   28158 system_pods.go:61] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:12:19.130614   28158 system_pods.go:61] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:12:19.130622   28158 system_pods.go:61] "kube-controller-manager-ha-227346-m03" [4b169608-0121-4f1f-8054-90eb0dd36462] Running
	I0819 17:12:19.130627   28158 system_pods.go:61] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:12:19.130635   28158 system_pods.go:61] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:12:19.130643   28158 system_pods.go:61] "kube-proxy-sxvbj" [59969a00-8b2e-4dd9-91d7-855f3ae4563e] Running
	I0819 17:12:19.130649   28158 system_pods.go:61] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:12:19.130657   28158 system_pods.go:61] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:12:19.130662   28158 system_pods.go:61] "kube-scheduler-ha-227346-m03" [aed0cf90-9cff-460f-8f33-e0b6d3dc6fac] Running
	I0819 17:12:19.130670   28158 system_pods.go:61] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:12:19.130678   28158 system_pods.go:61] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:12:19.130683   28158 system_pods.go:61] "kube-vip-ha-227346-m03" [e2f0e172-5175-4dde-ba66-3e0238d33afd] Running
	I0819 17:12:19.130690   28158 system_pods.go:61] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:12:19.130700   28158 system_pods.go:74] duration metric: took 182.220943ms to wait for pod list to return data ...
	I0819 17:12:19.130712   28158 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:12:19.319364   28158 request.go:632] Waited for 188.573996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:12:19.319420   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:12:19.319426   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.319433   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.319436   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.322238   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:19.322352   28158 default_sa.go:45] found service account: "default"
	I0819 17:12:19.322368   28158 default_sa.go:55] duration metric: took 191.648122ms for default service account to be created ...
	I0819 17:12:19.322377   28158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:12:19.518751   28158 request.go:632] Waited for 196.29873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.518822   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.518836   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.518847   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.518854   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.524177   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:19.530349   28158 system_pods.go:86] 24 kube-system pods found
	I0819 17:12:19.530374   28158 system_pods.go:89] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:12:19.530380   28158 system_pods.go:89] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:12:19.530384   28158 system_pods.go:89] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:12:19.530388   28158 system_pods.go:89] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:12:19.530391   28158 system_pods.go:89] "etcd-ha-227346-m03" [fb82b188-0187-4e5c-8829-5f498230f2dd] Running
	I0819 17:12:19.530394   28158 system_pods.go:89] "kindnet-2xfpd" [8ddc9fb1-b06d-43bb-b73e-ea2d505a36ab] Running
	I0819 17:12:19.530397   28158 system_pods.go:89] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:12:19.530400   28158 system_pods.go:89] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:12:19.530404   28158 system_pods.go:89] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:12:19.530407   28158 system_pods.go:89] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:12:19.530411   28158 system_pods.go:89] "kube-apiserver-ha-227346-m03" [cbf722b2-fc26-47e0-9f1e-4032d618b101] Running
	I0819 17:12:19.530414   28158 system_pods.go:89] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:12:19.530418   28158 system_pods.go:89] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:12:19.530421   28158 system_pods.go:89] "kube-controller-manager-ha-227346-m03" [4b169608-0121-4f1f-8054-90eb0dd36462] Running
	I0819 17:12:19.530427   28158 system_pods.go:89] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:12:19.530430   28158 system_pods.go:89] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:12:19.530433   28158 system_pods.go:89] "kube-proxy-sxvbj" [59969a00-8b2e-4dd9-91d7-855f3ae4563e] Running
	I0819 17:12:19.530436   28158 system_pods.go:89] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:12:19.530439   28158 system_pods.go:89] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:12:19.530445   28158 system_pods.go:89] "kube-scheduler-ha-227346-m03" [aed0cf90-9cff-460f-8f33-e0b6d3dc6fac] Running
	I0819 17:12:19.530454   28158 system_pods.go:89] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:12:19.530458   28158 system_pods.go:89] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:12:19.530463   28158 system_pods.go:89] "kube-vip-ha-227346-m03" [e2f0e172-5175-4dde-ba66-3e0238d33afd] Running
	I0819 17:12:19.530471   28158 system_pods.go:89] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:12:19.530479   28158 system_pods.go:126] duration metric: took 208.094264ms to wait for k8s-apps to be running ...
	I0819 17:12:19.530490   28158 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:12:19.530546   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:12:19.547883   28158 system_svc.go:56] duration metric: took 17.386016ms WaitForService to wait for kubelet
	I0819 17:12:19.547914   28158 kubeadm.go:582] duration metric: took 24.078991194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
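
The kubelet check above shells out to `sudo systemctl is-active --quiet service kubelet` and treats a zero exit status as "running". A rough, simplified equivalent in Go (illustrative only, and meant to run on the node itself, e.g. over `minikube ssh`):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 iff the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
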
	I0819 17:12:19.547931   28158 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:12:19.719314   28158 request.go:632] Waited for 171.31193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes
	I0819 17:12:19.719361   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes
	I0819 17:12:19.719366   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.719376   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.719380   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.723418   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:19.724417   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:12:19.724445   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:12:19.724460   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:12:19.724466   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:12:19.724473   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:12:19.724479   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:12:19.724486   28158 node_conditions.go:105] duration metric: took 176.55004ms to run NodePressure ...
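
The NodePressure step lists all nodes and reads their reported capacity; the three "cpu capacity is 2" / "ephemeral capacity is 17734596Ki" pairs above correspond to the three control-plane nodes. A compact sketch of reading those fields with client-go, under the same kubeconfig assumption as the readiness sketch earlier:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
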
	I0819 17:12:19.724502   28158 start.go:241] waiting for startup goroutines ...
	I0819 17:12:19.724536   28158 start.go:255] writing updated cluster config ...
	I0819 17:12:19.724873   28158 ssh_runner.go:195] Run: rm -f paused
	I0819 17:12:19.775087   28158 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:12:19.777054   28158 out.go:177] * Done! kubectl is now configured to use "ha-227346" cluster and "default" namespace by default
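
The last status line above compares the local kubectl version with the cluster version and reports the minor-version skew (0 here, since both are 1.31.0). A toy sketch of that comparison, assuming simple "major.minor.patch" version strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.31.0", "1.31.0" // values reported in the log above
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}
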
	
	
	==> CRI-O <==
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.412896821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087757412874746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=325bcf71-daa0-435d-b240-576d783f56e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.413569155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7f9e3c5-b455-4b29-87e0-5b9617ffe23b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.413642509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7f9e3c5-b455-4b29-87e0-5b9617ffe23b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.413870291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7f9e3c5-b455-4b29-87e0-5b9617ffe23b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.447386899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4974611-5584-454d-89e3-87ef769c8969 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.447457091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4974611-5584-454d-89e3-87ef769c8969 name=/runtime.v1.RuntimeService/Version
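
The entries in this section are CRI-O's debug logging of incoming CRI gRPC calls (Version, ImageFsInfo, ListContainers) from the kubelet and tooling. A minimal sketch of issuing the same Version call over the CRI socket, assuming CRI-O's default /var/run/crio/crio.sock path and the k8s.io/cri-api client stubs:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O is listening on its default unix socket on this node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	// The response logged above reports RuntimeName:cri-o, RuntimeVersion:1.29.1.
	fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
}
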
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.448401056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f882e3a-a911-4280-9da0-8cb335423604 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.448810279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087757448789895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f882e3a-a911-4280-9da0-8cb335423604 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.449464176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a22ef66-0d69-4b66-98d2-b6cf4d65b334 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.449516791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a22ef66-0d69-4b66-98d2-b6cf4d65b334 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.449729667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a22ef66-0d69-4b66-98d2-b6cf4d65b334 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.483630537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47ce6878-865e-4637-b433-146fcf2d41a8 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.483699217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47ce6878-865e-4637-b433-146fcf2d41a8 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.484899324Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee5b0fde-9375-4ba7-a027-c25f3eb0bf6f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.485359309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087757485335786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee5b0fde-9375-4ba7-a027-c25f3eb0bf6f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.485827995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b1f9dec-3596-4bc1-bf7b-6729dff84a23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.485876102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b1f9dec-3596-4bc1-bf7b-6729dff84a23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.486128867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b1f9dec-3596-4bc1-bf7b-6729dff84a23 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.520465799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fb40990-013f-421d-9e9c-9d02fd7aef67 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.520542945Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fb40990-013f-421d-9e9c-9d02fd7aef67 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.521645501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2872115-6e43-4b7c-8677-19d592742329 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.522186333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087757522160764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2872115-6e43-4b7c-8677-19d592742329 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.522746259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f6b5b39-0033-4a00-abb0-b24b1b9de51c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.522835239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f6b5b39-0033-4a00-abb0-b24b1b9de51c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:15:57 ha-227346 crio[676]: time="2024-08-19 17:15:57.523088714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f6b5b39-0033-4a00-abb0-b24b1b9de51c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0624a8dba0695       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     5 minutes ago       Running             coredns                   0                   d17668585f283       coredns-6f6b679f8f-9s77g
	7400c3a3872ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     5 minutes ago       Running             storage-provisioner       0                   60ebfd22a6daa       storage-provisioner
	e4e823e549cc3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     5 minutes ago       Running             coredns                   0                   92d2a30360883       coredns-6f6b679f8f-r68td
	59dabea0b2cb1       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   6 minutes ago       Running             kindnet-cni               0                   4c49ea56223c8       kindnet-lwjmd
	25c817915a7df       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                     6 minutes ago       Running             kube-proxy                0                   8ca1f3b2cdf29       kube-proxy-9xpm4
	b5eaaf42a1219       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f    6 minutes ago       Running             kube-vip                  0                   4f89b348afb84       kube-vip-ha-227346
	511d8c1a0ec34       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                     6 minutes ago       Running             kube-apiserver            0                   2a4dcc8805294       kube-apiserver-ha-227346
	7367ba44817a2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                     6 minutes ago       Running             kube-controller-manager   0                   0cc361224291f       kube-controller-manager-ha-227346
	ded6224ece6e4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                     6 minutes ago       Running             kube-scheduler            0                   3813d79e090bd       kube-scheduler-ha-227346
	c1727fa7d7c9f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     6 minutes ago       Running             etcd                      0                   9fe350c701f53       etcd-ha-227346
	
	
	==> coredns [0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4] <==
	[INFO] 10.244.2.2:37607 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177878s
	[INFO] 10.244.2.2:42454 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00335028s
	[INFO] 10.244.2.2:49221 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132265s
	[INFO] 10.244.2.2:58999 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151192s
	[INFO] 10.244.1.2:52835 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001787677s
	[INFO] 10.244.1.2:36917 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101601s
	[INFO] 10.244.1.2:56268 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197112s
	[INFO] 10.244.1.2:53208 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001266869s
	[INFO] 10.244.1.2:32844 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072428s
	[INFO] 10.244.1.3:44481 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088917s
	[INFO] 10.244.1.3:46305 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145954s
	[INFO] 10.244.2.2:55212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123615s
	[INFO] 10.244.2.2:34683 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089323s
	[INFO] 10.244.2.2:41746 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156593s
	[INFO] 10.244.1.2:55757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148488s
	[INFO] 10.244.1.2:40727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010542s
	[INFO] 10.244.1.3:44262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115488s
	[INFO] 10.244.1.3:45504 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123275s
	[INFO] 10.244.2.2:42245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251796s
	[INFO] 10.244.2.2:36792 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165895s
	[INFO] 10.244.2.2:45239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156083s
	[INFO] 10.244.1.2:36640 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091031s
	[INFO] 10.244.1.2:39845 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090422s
	[INFO] 10.244.1.3:44584 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131606s
	[INFO] 10.244.1.3:41596 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084019s
	
	
	==> coredns [e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6] <==
	[INFO] 10.244.1.2:47163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230381s
	[INFO] 10.244.1.2:50433 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.003436809s
	[INFO] 10.244.1.2:59195 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000100636s
	[INFO] 10.244.1.2:32814 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001599123s
	[INFO] 10.244.2.2:39529 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195202s
	[INFO] 10.244.2.2:33472 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000242135s
	[INFO] 10.244.1.2:51221 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147815s
	[INFO] 10.244.1.2:43702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097631s
	[INFO] 10.244.1.2:40951 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142664s
	[INFO] 10.244.1.3:35658 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011583s
	[INFO] 10.244.1.3:54609 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746971s
	[INFO] 10.244.1.3:38577 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187309s
	[INFO] 10.244.1.3:55629 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001059113s
	[INFO] 10.244.1.3:53767 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013021s
	[INFO] 10.244.1.3:58767 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094503s
	[INFO] 10.244.2.2:44014 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108961s
	[INFO] 10.244.1.2:50869 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144661s
	[INFO] 10.244.1.2:41585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067801s
	[INFO] 10.244.1.3:33644 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235041s
	[INFO] 10.244.1.3:35998 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158822s
	[INFO] 10.244.2.2:49281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113706s
	[INFO] 10.244.1.2:55115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127183s
	[INFO] 10.244.1.2:50067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143513s
	[INFO] 10.244.1.3:45276 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119251s
	[INFO] 10.244.1.3:34581 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000202685s
	
	
	==> describe nodes <==
	Name:               ha-227346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_09_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:09:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:15:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:10:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-227346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 80471ea49a664581949d80643cd4d82b
	  System UUID:                80471ea4-9a66-4581-949d-80643cd4d82b
	  Boot ID:                    b4e046ad-f0c8-4e0a-a3c8-ccc4927ebc7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9s77g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m12s
	  kube-system                 coredns-6f6b679f8f-r68td             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m12s
	  kube-system                 etcd-ha-227346                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-lwjmd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m12s
	  kube-system                 kube-apiserver-ha-227346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-227346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-9xpm4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-ha-227346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-227346                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m11s  kube-proxy       
	  Normal  Starting                 6m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s  kubelet          Node ha-227346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s  kubelet          Node ha-227346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s  kubelet          Node ha-227346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m13s  node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  NodeReady                5m56s  kubelet          Node ha-227346 status is now: NodeReady
	  Normal  RegisteredNode           5m10s  node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           3m58s  node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	
	
	Name:               ha-227346-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_10_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:10:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:13:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-227346-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 feb788fca1734d35a419eead2319624a
	  System UUID:                feb788fc-a173-4d35-a419-eead2319624a
	  Boot ID:                    7455d09e-c221-4dad-aeae-f6832bcbda8f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dncbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  default                     busybox-7dff88458-k75xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-227346-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-mk55z                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-227346-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-227346-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-6lhlp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-227346-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-vip-ha-227346-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m18s                  cidrAllocator    Node ha-227346-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-227346-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-227346-m02 status is now: NodeNotReady
	
	
	Name:               ha-227346-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_11_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:11:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:15:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:11:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:11:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:11:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:12:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-227346-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a013b2ee813e40c8a8d8936e0473daaa
	  System UUID:                a013b2ee-813e-40c8-a8d8-936e0473daaa
	  Boot ID:                    370f4f2f-3248-4a84-a8d1-aff69aaf456c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cvdvs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-227346-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-2xfpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-227346-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-227346-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-sxvbj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-227346-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-vip-ha-227346-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     4m6s                 cidrAllocator    Node ha-227346-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-227346-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	
	
	Name:               ha-227346-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_12_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:12:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:15:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:12:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:12:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:12:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:13:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-227346-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8069ae3ff9145c9b8ed7bff35cdea96
	  System UUID:                d8069ae3-ff91-45c9-b8ed-7bff35cdea96
	  Boot ID:                    1c56de0c-688b-4d9f-bbf7-32b68d2778a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sctvz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-7ktdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m55s              kube-proxy       
	  Normal  NodeAllocatableEnforced  3m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m                 cidrAllocator    Node ha-227346-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-227346-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m58s              node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal  RegisteredNode           2m58s              node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal  NodeReady                2m40s              kubelet          Node ha-227346-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 17:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050820] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037447] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.694222] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.744763] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.535363] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.218849] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.053481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061538] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.190350] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134022] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.260627] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +3.698622] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.234958] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.058962] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.409298] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.084115] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.075846] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 17:10] kauditd_printk_skb: 36 callbacks suppressed
	[ +43.945746] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed] <==
	{"level":"warn","ts":"2024-08-19T17:15:57.810034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.817427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.820533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.827953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.833130Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.836204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.838633Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.841994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.844333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.850615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.851898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.857523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.865315Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.865540Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.869208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.873217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.875596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.878141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.880495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.892219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.898475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.905314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.907902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.910893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:15:57.935840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:15:57 up 6 min,  0 users,  load average: 0.08, 0.18, 0.10
	Linux ha-227346 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9] <==
	I0819 17:15:20.463589       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:15:30.472225       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:15:30.472340       1 main.go:299] handling current node
	I0819 17:15:30.472372       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:15:30.472393       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:15:30.472582       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:15:30.472615       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:15:30.472706       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:15:30.472728       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:15:40.468374       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:15:40.468503       1 main.go:299] handling current node
	I0819 17:15:40.468554       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:15:40.468604       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:15:40.468812       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:15:40.468837       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:15:40.468909       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:15:40.468928       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:15:50.462511       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:15:50.463371       1 main.go:299] handling current node
	I0819 17:15:50.463408       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:15:50.463475       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:15:50.463671       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:15:50.463696       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:15:50.463768       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:15:50.463787       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453] <==
	I0819 17:09:39.452136       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:09:39.607665       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 17:09:39.616011       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.205]
	I0819 17:09:39.617696       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:09:39.623790       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:09:39.669994       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:09:40.855733       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:09:40.880785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 17:09:41.007349       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:09:45.127522       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 17:09:45.324004       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 17:12:26.023041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33338: use of closed network connection
	E0819 17:12:26.208281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33354: use of closed network connection
	E0819 17:12:26.390037       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33368: use of closed network connection
	E0819 17:12:26.564430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33382: use of closed network connection
	E0819 17:12:26.730728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59890: use of closed network connection
	E0819 17:12:26.894819       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59906: use of closed network connection
	E0819 17:12:27.058806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59930: use of closed network connection
	E0819 17:12:27.229813       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59950: use of closed network connection
	E0819 17:12:27.687890       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59982: use of closed network connection
	E0819 17:12:27.849598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60012: use of closed network connection
	E0819 17:12:28.027989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60028: use of closed network connection
	E0819 17:12:28.192330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60044: use of closed network connection
	E0819 17:12:28.365199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60064: use of closed network connection
	E0819 17:12:28.526739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60084: use of closed network connection
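	The "use of closed network connection" errors above are read failures on connections from the host side of the libvirt network (192.168.39.1) that were closed mid-request; they usually indicate interrupted clients rather than an apiserver fault. A quick hedged check that the apiserver behind the VIP is still serving:

    kubectl --context ha-227346 get --raw "/readyz?verbose"
    kubectl --context ha-227346 get nodes -o wide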
	
	
	==> kube-controller-manager [7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547] <==
	I0819 17:12:57.183782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:57.246143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:57.482926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:57.656966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.639619       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-227346-m04"
	I0819 17:12:59.639793       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.683717       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.845974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.901098       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:07.478004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:17.753434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-227346-m04"
	I0819 17:13:17.754345       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:17.767937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:19.654976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:28.185018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:14:14.680741       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-227346-m04"
	I0819 17:14:14.681323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:14:14.731114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:14:14.858399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.418068ms"
	I0819 17:14:14.858570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.195µs"
	I0819 17:14:14.920460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:14:14.936929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.317377ms"
	I0819 17:14:14.937151       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="173.378µs"
	I0819 17:14:19.988820       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:15:18.733334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346"
	
	
	==> kube-proxy [25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:09:46.147178       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:09:46.158672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.205"]
	E0819 17:09:46.158812       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:09:46.198739       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:09:46.198779       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:09:46.198806       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:09:46.201038       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:09:46.201309       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:09:46.201339       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:09:46.204850       1 config.go:197] "Starting service config controller"
	I0819 17:09:46.204894       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:09:46.204926       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:09:46.204930       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:09:46.206648       1 config.go:326] "Starting node config controller"
	I0819 17:09:46.206677       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:09:46.306379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:09:46.306521       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:09:46.306796       1 shared_informer.go:320] Caches are synced for node config
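	The nftables cleanup errors at the top of this section ("Operation not supported" for add table ip kube-proxy) indicate the Buildroot guest kernel has no usable nf_tables support, so kube-proxy falls back to the iptables proxier, as the "Using iptables Proxier" line confirms. A hedged sketch for confirming which backend the guest actually has (lsmod will miss built-in support, so treat a missing module as only a hint):

    out/minikube-linux-amd64 -p ha-227346 ssh "lsmod | grep -E 'nf_tables|ip_tables' || true"
    out/minikube-linux-amd64 -p ha-227346 ssh "sudo iptables-save | grep -c KUBE- || true"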
	
	
	==> kube-scheduler [ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577] <==
	W0819 17:09:39.042786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:09:39.042870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:09:39.061817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:09:39.061879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:09:41.006924       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:11:51.662271       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sxvbj\": pod kube-proxy-sxvbj is already assigned to node \"ha-227346-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sxvbj" node="ha-227346-m03"
	E0819 17:11:51.662435       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sxvbj\": pod kube-proxy-sxvbj is already assigned to node \"ha-227346-m03\"" pod="kube-system/kube-proxy-sxvbj"
	I0819 17:11:51.662497       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sxvbj" node="ha-227346-m03"
	I0819 17:12:20.628625       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="362e7b22-83fb-4748-a048-9ef1f609910d" pod="default/busybox-7dff88458-k75xm" assumedNode="ha-227346-m02" currentNode="ha-227346-m03"
	E0819 17:12:20.632886       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k75xm\": pod busybox-7dff88458-k75xm is already assigned to node \"ha-227346-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k75xm" node="ha-227346-m03"
	E0819 17:12:20.632974       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 362e7b22-83fb-4748-a048-9ef1f609910d(default/busybox-7dff88458-k75xm) was assumed on ha-227346-m03 but assigned to ha-227346-m02" pod="default/busybox-7dff88458-k75xm"
	E0819 17:12:20.633012       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k75xm\": pod busybox-7dff88458-k75xm is already assigned to node \"ha-227346-m02\"" pod="default/busybox-7dff88458-k75xm"
	I0819 17:12:20.633123       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k75xm" node="ha-227346-m02"
	E0819 17:12:20.698274       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-c789k\": pod busybox-7dff88458-c789k is already assigned to node \"ha-227346\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-c789k" node="ha-227346"
	E0819 17:12:20.698997       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-c789k\": pod busybox-7dff88458-c789k is already assigned to node \"ha-227346\"" pod="default/busybox-7dff88458-c789k"
	E0819 17:12:57.159264       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sctvz\": pod kindnet-sctvz is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sctvz" node="ha-227346-m04"
	E0819 17:12:57.159361       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bbe42f64-8bcd-40dd-8a98-f0ca95e3ade7(kube-system/kindnet-sctvz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sctvz"
	E0819 17:12:57.159407       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sctvz\": pod kindnet-sctvz is already assigned to node \"ha-227346-m04\"" pod="kube-system/kindnet-sctvz"
	I0819 17:12:57.159455       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sctvz" node="ha-227346-m04"
	E0819 17:12:57.162787       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7ktdr\": pod kube-proxy-7ktdr is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7ktdr" node="ha-227346-m04"
	E0819 17:12:57.162854       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7ktdr\": pod kube-proxy-7ktdr is already assigned to node \"ha-227346-m04\"" pod="kube-system/kube-proxy-7ktdr"
	E0819 17:12:57.199546       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pzs6h\": pod kube-proxy-pzs6h is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pzs6h" node="ha-227346-m04"
	E0819 17:12:57.199793       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pzs6h\": pod kube-proxy-pzs6h is already assigned to node \"ha-227346-m04\"" pod="kube-system/kube-proxy-pzs6h"
	E0819 17:12:57.200501       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9clnw\": pod kindnet-9clnw is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9clnw" node="ha-227346-m04"
	E0819 17:12:57.201139       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9clnw\": pod kindnet-9clnw is already assigned to node \"ha-227346-m04\"" pod="kube-system/kindnet-9clnw"
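	The "Plugin Failed ... already assigned to node" errors above are bind conflicts: by the time this scheduler tried to bind, the pod already had a node set (for example bound by a scheduler instance on another control-plane node around a leadership change). The accompanying "Pod has been assigned to node. Abort adding it back to queue." lines show the pods still ended up assigned exactly once. A quick hedged check that nothing was left unscheduled as a result:

    kubectl --context ha-227346 get pods -A -o wide --field-selector=status.phase=Pending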
	
	
	==> kubelet <==
	Aug 19 17:14:41 ha-227346 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:14:41 ha-227346 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:14:41 ha-227346 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:14:41 ha-227346 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:14:41 ha-227346 kubelet[1301]: E0819 17:14:41.081675    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087681081360610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:14:41 ha-227346 kubelet[1301]: E0819 17:14:41.081729    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087681081360610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:14:51 ha-227346 kubelet[1301]: E0819 17:14:51.083975    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087691083552497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:14:51 ha-227346 kubelet[1301]: E0819 17:14:51.084046    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087691083552497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:01 ha-227346 kubelet[1301]: E0819 17:15:01.086570    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087701086118432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:01 ha-227346 kubelet[1301]: E0819 17:15:01.086891    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087701086118432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:11 ha-227346 kubelet[1301]: E0819 17:15:11.088773    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087711088458346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:11 ha-227346 kubelet[1301]: E0819 17:15:11.088816    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087711088458346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:21 ha-227346 kubelet[1301]: E0819 17:15:21.090281    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087721089961043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:21 ha-227346 kubelet[1301]: E0819 17:15:21.090705    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087721089961043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:31 ha-227346 kubelet[1301]: E0819 17:15:31.092703    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087731092319557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:31 ha-227346 kubelet[1301]: E0819 17:15:31.093099    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087731092319557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:41 ha-227346 kubelet[1301]: E0819 17:15:41.004104    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:15:41 ha-227346 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:15:41 ha-227346 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:15:41 ha-227346 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:15:41 ha-227346 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:15:41 ha-227346 kubelet[1301]: E0819 17:15:41.095275    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087741094897643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:41 ha-227346 kubelet[1301]: E0819 17:15:41.095322    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087741094897643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:51 ha-227346 kubelet[1301]: E0819 17:15:51.097288    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087751096913876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:51 ha-227346 kubelet[1301]: E0819 17:15:51.097659    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087751096913876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
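	Two recurring kubelet complaints in this section: the iptables canary failing because the guest kernel has no ip6tables nat table, and the eviction manager reporting "missing image stats" from the CRI-O ImageFsInfo response. Both look noisy rather than fatal in this run. A hedged sketch for inspecting them from the node, assuming crictl is on the guest's PATH as it normally is in the minikube VM:

    out/minikube-linux-amd64 -p ha-227346 ssh "sudo ip6tables -t nat -L -n || true"
    out/minikube-linux-amd64 -p ha-227346 ssh "sudo crictl imagefsinfo"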
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-227346 -n ha-227346
helpers_test.go:261: (dbg) Run:  kubectl --context ha-227346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (62.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 3 (3.196437329s)

                                                
                                                
-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:16:02.442660   32917 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:16:02.442898   32917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:02.442906   32917 out.go:358] Setting ErrFile to fd 2...
	I0819 17:16:02.442910   32917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:02.443078   32917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:16:02.443245   32917 out.go:352] Setting JSON to false
	I0819 17:16:02.443268   32917 mustload.go:65] Loading cluster: ha-227346
	I0819 17:16:02.443371   32917 notify.go:220] Checking for updates...
	I0819 17:16:02.443605   32917 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:16:02.443618   32917 status.go:255] checking status of ha-227346 ...
	I0819 17:16:02.443993   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:02.444047   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:02.463376   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46459
	I0819 17:16:02.463832   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:02.464434   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:02.464475   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:02.464989   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:02.465176   32917 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:16:02.467062   32917 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:16:02.467079   32917 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:02.467354   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:02.467395   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:02.482129   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0819 17:16:02.482578   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:02.483246   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:02.483265   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:02.483535   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:02.483714   32917 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:16:02.486388   32917 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:02.486800   32917 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:02.486822   32917 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:02.486969   32917 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:02.487244   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:02.487278   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:02.502306   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I0819 17:16:02.502734   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:02.503242   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:02.503265   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:02.503610   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:02.503776   32917 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:16:02.503970   32917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:02.503988   32917 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:16:02.506486   32917 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:02.506904   32917 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:02.506925   32917 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:02.507102   32917 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:16:02.507279   32917 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:16:02.507450   32917 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:16:02.507631   32917 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:16:02.587608   32917 ssh_runner.go:195] Run: systemctl --version
	I0819 17:16:02.593699   32917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:02.608472   32917 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:02.608513   32917 api_server.go:166] Checking apiserver status ...
	I0819 17:16:02.608553   32917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:02.621551   32917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:16:02.630253   32917 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:02.630306   32917 ssh_runner.go:195] Run: ls
	I0819 17:16:02.634298   32917 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:02.638264   32917 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:02.638283   32917 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:16:02.638292   32917 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:02.638308   32917 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:16:02.638625   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:02.638657   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:02.654919   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I0819 17:16:02.655301   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:02.655746   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:02.655766   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:02.656073   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:02.656217   32917 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:16:02.657686   32917 status.go:330] ha-227346-m02 host status = "Running" (err=<nil>)
	I0819 17:16:02.657703   32917 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:02.657979   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:02.658009   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:02.672636   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0819 17:16:02.673042   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:02.673538   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:02.673556   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:02.673827   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:02.674027   32917 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:16:02.676634   32917 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:02.677107   32917 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:02.677128   32917 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:02.677331   32917 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:02.677634   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:02.677666   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:02.691955   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33123
	I0819 17:16:02.692350   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:02.692812   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:02.692831   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:02.693133   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:02.693314   32917 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:16:02.693506   32917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:02.693533   32917 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:16:02.696036   32917 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:02.696489   32917 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:02.696514   32917 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:02.696654   32917 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:16:02.696817   32917 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:16:02.696982   32917 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:16:02.697120   32917 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	W0819 17:16:05.261104   32917 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:05.261225   32917 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	E0819 17:16:05.261250   32917 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:05.261272   32917 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 17:16:05.261289   32917 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:05.261299   32917 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:16:05.261612   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:05.261654   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:05.276777   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I0819 17:16:05.277249   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:05.277716   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:05.277741   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:05.278035   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:05.278170   32917 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:16:05.279761   32917 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:16:05.279780   32917 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:05.280098   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:05.280149   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:05.294400   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0819 17:16:05.294755   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:05.295185   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:05.295210   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:05.295488   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:05.295672   32917 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:16:05.298432   32917 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:05.298834   32917 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:05.298862   32917 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:05.298991   32917 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:05.299286   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:05.299320   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:05.313620   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41905
	I0819 17:16:05.314000   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:05.314395   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:05.314415   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:05.314701   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:05.314893   32917 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:16:05.315035   32917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:05.315057   32917 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:16:05.318134   32917 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:05.318546   32917 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:05.318570   32917 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:05.318696   32917 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:16:05.318864   32917 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:16:05.319006   32917 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:16:05.319157   32917 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:16:05.400078   32917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:05.414933   32917 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:05.414959   32917 api_server.go:166] Checking apiserver status ...
	I0819 17:16:05.415011   32917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:05.429888   32917 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:16:05.440531   32917 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:05.440575   32917 ssh_runner.go:195] Run: ls
	I0819 17:16:05.444334   32917 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:05.450381   32917 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:05.450402   32917 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:16:05.450410   32917 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:05.450425   32917 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:16:05.450784   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:05.450827   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:05.465744   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37227
	I0819 17:16:05.466159   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:05.466682   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:05.466704   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:05.467025   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:05.467243   32917 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:16:05.468798   32917 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:16:05.468815   32917 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:05.469082   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:05.469114   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:05.484196   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0819 17:16:05.484548   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:05.485031   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:05.485053   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:05.485342   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:05.485508   32917 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:16:05.488503   32917 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:05.488977   32917 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:05.489004   32917 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:05.489117   32917 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:05.489418   32917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:05.489468   32917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:05.503905   32917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0819 17:16:05.504272   32917 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:05.504762   32917 main.go:141] libmachine: Using API Version  1
	I0819 17:16:05.504787   32917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:05.505090   32917 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:05.505266   32917 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:16:05.505444   32917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:05.505466   32917 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:16:05.508476   32917 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:05.508873   32917 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:05.508907   32917 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:05.509055   32917 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:16:05.509229   32917 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:16:05.509378   32917 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:16:05.509491   32917 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:16:05.583959   32917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:05.598733   32917 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 3 (4.99491782s)

-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 17:16:06.794838   33001 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:16:06.794964   33001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:06.794974   33001 out.go:358] Setting ErrFile to fd 2...
	I0819 17:16:06.794978   33001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:06.795216   33001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:16:06.795394   33001 out.go:352] Setting JSON to false
	I0819 17:16:06.795419   33001 mustload.go:65] Loading cluster: ha-227346
	I0819 17:16:06.795561   33001 notify.go:220] Checking for updates...
	I0819 17:16:06.795867   33001 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:16:06.795879   33001 status.go:255] checking status of ha-227346 ...
	I0819 17:16:06.796303   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:06.796366   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:06.812334   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40233
	I0819 17:16:06.812743   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:06.813251   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:06.813275   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:06.813647   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:06.813829   33001 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:16:06.815271   33001 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:16:06.815286   33001 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:06.815568   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:06.815599   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:06.830344   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
	I0819 17:16:06.830760   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:06.831245   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:06.831271   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:06.831580   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:06.831747   33001 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:16:06.834535   33001 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:06.834964   33001 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:06.834990   33001 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:06.835132   33001 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:06.835537   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:06.835577   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:06.850212   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0819 17:16:06.850625   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:06.851105   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:06.851130   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:06.851451   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:06.851641   33001 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:16:06.851822   33001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:06.851855   33001 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:16:06.854696   33001 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:06.855132   33001 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:06.855168   33001 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:06.855300   33001 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:16:06.855470   33001 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:16:06.855622   33001 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:16:06.855769   33001 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:16:06.936832   33001 ssh_runner.go:195] Run: systemctl --version
	I0819 17:16:06.944305   33001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:06.958599   33001 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:06.958644   33001 api_server.go:166] Checking apiserver status ...
	I0819 17:16:06.958685   33001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:06.972592   33001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:16:06.981249   33001 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:06.981305   33001 ssh_runner.go:195] Run: ls
	I0819 17:16:06.986216   33001 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:06.990348   33001 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:06.990378   33001 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:16:06.990390   33001 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:06.990421   33001 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:16:06.990798   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:06.990841   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:07.005401   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0819 17:16:07.005821   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:07.006335   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:07.006359   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:07.006738   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:07.006984   33001 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:16:07.008631   33001 status.go:330] ha-227346-m02 host status = "Running" (err=<nil>)
	I0819 17:16:07.008648   33001 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:07.008987   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:07.009022   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:07.023582   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45655
	I0819 17:16:07.024027   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:07.024487   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:07.024507   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:07.024831   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:07.024990   33001 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:16:07.027724   33001 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:07.028139   33001 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:07.028178   33001 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:07.028310   33001 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:07.028653   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:07.028691   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:07.043215   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42439
	I0819 17:16:07.043600   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:07.044055   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:07.044074   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:07.044382   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:07.044564   33001 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:16:07.044773   33001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:07.044796   33001 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:16:07.047281   33001 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:07.047725   33001 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:07.047753   33001 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:07.047889   33001 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:16:07.048073   33001 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:16:07.048221   33001 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:16:07.048340   33001 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	W0819 17:16:08.333115   33001 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:08.333166   33001 retry.go:31] will retry after 211.251284ms: dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:11.405008   33001 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:11.405119   33001 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	E0819 17:16:11.405144   33001 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:11.405153   33001 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 17:16:11.405182   33001 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:11.405193   33001 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:16:11.405597   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:11.405639   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:11.420925   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I0819 17:16:11.421332   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:11.421752   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:11.421773   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:11.422052   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:11.422211   33001 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:16:11.423735   33001 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:16:11.423748   33001 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:11.424019   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:11.424049   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:11.438368   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0819 17:16:11.438787   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:11.439290   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:11.439314   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:11.439594   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:11.439762   33001 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:16:11.442206   33001 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:11.442614   33001 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:11.442644   33001 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:11.442791   33001 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:11.443153   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:11.443199   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:11.457214   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0819 17:16:11.457598   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:11.458008   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:11.458034   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:11.458305   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:11.458500   33001 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:16:11.458682   33001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:11.458701   33001 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:16:11.461272   33001 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:11.461713   33001 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:11.461746   33001 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:11.461852   33001 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:16:11.462009   33001 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:16:11.462149   33001 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:16:11.462296   33001 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:16:11.545503   33001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:11.560301   33001 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:11.560325   33001 api_server.go:166] Checking apiserver status ...
	I0819 17:16:11.560357   33001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:11.578766   33001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:16:11.588037   33001 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:11.588093   33001 ssh_runner.go:195] Run: ls
	I0819 17:16:11.592246   33001 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:11.598043   33001 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:11.598064   33001 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:16:11.598072   33001 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:11.598087   33001 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:16:11.598365   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:11.598398   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:11.613771   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0819 17:16:11.614100   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:11.614562   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:11.614588   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:11.614907   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:11.615100   33001 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:16:11.616675   33001 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:16:11.616693   33001 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:11.617009   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:11.617045   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:11.631603   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42873
	I0819 17:16:11.631918   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:11.632315   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:11.632335   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:11.632598   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:11.632784   33001 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:16:11.635321   33001 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:11.635678   33001 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:11.635703   33001 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:11.635787   33001 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:11.636044   33001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:11.636076   33001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:11.650332   33001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0819 17:16:11.650718   33001 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:11.651172   33001 main.go:141] libmachine: Using API Version  1
	I0819 17:16:11.651190   33001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:11.651509   33001 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:11.651726   33001 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:16:11.651928   33001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:11.651950   33001 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:16:11.654631   33001 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:11.655046   33001 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:11.655070   33001 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:11.655204   33001 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:16:11.655383   33001 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:16:11.655508   33001 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:16:11.655626   33001 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:16:11.731230   33001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:11.744485   33001 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 3 (4.71764443s)

-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 17:16:13.206324   33116 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:16:13.206463   33116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:13.206472   33116 out.go:358] Setting ErrFile to fd 2...
	I0819 17:16:13.206477   33116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:13.206657   33116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:16:13.206806   33116 out.go:352] Setting JSON to false
	I0819 17:16:13.206832   33116 mustload.go:65] Loading cluster: ha-227346
	I0819 17:16:13.206884   33116 notify.go:220] Checking for updates...
	I0819 17:16:13.207328   33116 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:16:13.207351   33116 status.go:255] checking status of ha-227346 ...
	I0819 17:16:13.207763   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:13.207811   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:13.226674   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0819 17:16:13.227037   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:13.227645   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:13.227679   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:13.228055   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:13.228272   33116 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:16:13.229857   33116 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:16:13.229874   33116 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:13.230144   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:13.230177   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:13.244239   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0819 17:16:13.244596   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:13.245010   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:13.245028   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:13.245376   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:13.245562   33116 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:16:13.248268   33116 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:13.248738   33116 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:13.248783   33116 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:13.248906   33116 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:13.249260   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:13.249311   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:13.263548   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I0819 17:16:13.263935   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:13.264367   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:13.264391   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:13.264730   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:13.264895   33116 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:16:13.265077   33116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:13.265103   33116 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:16:13.267784   33116 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:13.268160   33116 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:13.268180   33116 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:13.268316   33116 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:16:13.268487   33116 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:16:13.268623   33116 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:16:13.268742   33116 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:16:13.348651   33116 ssh_runner.go:195] Run: systemctl --version
	I0819 17:16:13.354445   33116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:13.369105   33116 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:13.369148   33116 api_server.go:166] Checking apiserver status ...
	I0819 17:16:13.369208   33116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:13.383227   33116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:16:13.393050   33116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:13.393098   33116 ssh_runner.go:195] Run: ls
	I0819 17:16:13.396952   33116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:13.400935   33116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:13.400957   33116 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:16:13.400965   33116 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:13.400992   33116 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:16:13.401309   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:13.401344   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:13.416970   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I0819 17:16:13.417466   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:13.417934   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:13.417958   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:13.418312   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:13.418501   33116 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:16:13.420065   33116 status.go:330] ha-227346-m02 host status = "Running" (err=<nil>)
	I0819 17:16:13.420083   33116 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:13.420487   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:13.420535   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:13.435295   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0819 17:16:13.435715   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:13.436177   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:13.436199   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:13.436523   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:13.436709   33116 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:16:13.439670   33116 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:13.440091   33116 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:13.440127   33116 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:13.440217   33116 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:13.440534   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:13.440573   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:13.454875   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0819 17:16:13.455318   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:13.455786   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:13.455806   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:13.456082   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:13.456247   33116 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:16:13.456422   33116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:13.456440   33116 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:16:13.459062   33116 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:13.459434   33116 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:13.459458   33116 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:13.459630   33116 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:16:13.459787   33116 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:16:13.459930   33116 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:16:13.460054   33116 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	W0819 17:16:14.477044   33116 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:14.477111   33116 retry.go:31] will retry after 175.62922ms: dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:17.548994   33116 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:17.549091   33116 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	E0819 17:16:17.549114   33116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:17.549124   33116 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 17:16:17.549172   33116 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:17.549182   33116 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:16:17.549658   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:17.549733   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:17.564504   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39689
	I0819 17:16:17.564936   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:17.565334   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:17.565351   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:17.565669   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:17.565844   33116 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:16:17.567286   33116 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:16:17.567302   33116 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:17.567659   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:17.567699   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:17.582245   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44469
	I0819 17:16:17.582616   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:17.583020   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:17.583037   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:17.583354   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:17.583520   33116 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:16:17.586168   33116 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:17.586581   33116 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:17.586603   33116 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:17.586750   33116 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:17.587053   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:17.587095   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:17.601911   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0819 17:16:17.602301   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:17.602797   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:17.602819   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:17.603090   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:17.603280   33116 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:16:17.603439   33116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:17.603460   33116 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:16:17.606212   33116 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:17.606574   33116 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:17.606609   33116 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:17.606740   33116 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:16:17.606906   33116 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:16:17.607041   33116 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:16:17.607171   33116 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:16:17.684097   33116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:17.699049   33116 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:17.699075   33116 api_server.go:166] Checking apiserver status ...
	I0819 17:16:17.699106   33116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:17.712317   33116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:16:17.721081   33116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:17.721126   33116 ssh_runner.go:195] Run: ls
	I0819 17:16:17.725095   33116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:17.730900   33116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:17.730925   33116 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:16:17.730936   33116 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:17.730956   33116 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:16:17.731492   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:17.731537   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:17.746218   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0819 17:16:17.746584   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:17.747074   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:17.747098   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:17.747382   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:17.747547   33116 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:16:17.749299   33116 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:16:17.749312   33116 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:17.749615   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:17.749651   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:17.765289   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I0819 17:16:17.765650   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:17.766054   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:17.766072   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:17.766365   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:17.766544   33116 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:16:17.769242   33116 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:17.769594   33116 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:17.769632   33116 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:17.769763   33116 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:17.770040   33116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:17.770072   33116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:17.785618   33116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0819 17:16:17.786036   33116 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:17.786498   33116 main.go:141] libmachine: Using API Version  1
	I0819 17:16:17.786518   33116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:17.786810   33116 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:17.786987   33116 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:16:17.787164   33116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:17.787186   33116 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:16:17.789933   33116 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:17.790520   33116 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:17.790544   33116 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:17.790733   33116 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:16:17.790881   33116 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:16:17.791013   33116 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:16:17.791151   33116 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:16:17.867846   33116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:17.882271   33116 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 3 (4.447929928s)

-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 17:16:19.954086   33218 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:16:19.954310   33218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:19.954319   33218 out.go:358] Setting ErrFile to fd 2...
	I0819 17:16:19.954323   33218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:19.954484   33218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:16:19.954636   33218 out.go:352] Setting JSON to false
	I0819 17:16:19.954661   33218 mustload.go:65] Loading cluster: ha-227346
	I0819 17:16:19.954710   33218 notify.go:220] Checking for updates...
	I0819 17:16:19.955194   33218 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:16:19.955215   33218 status.go:255] checking status of ha-227346 ...
	I0819 17:16:19.955738   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:19.955788   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:19.971209   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0819 17:16:19.971621   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:19.972289   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:19.972328   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:19.972736   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:19.972930   33218 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:16:19.974603   33218 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:16:19.974620   33218 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:19.974977   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:19.975040   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:19.990617   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33353
	I0819 17:16:19.991048   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:19.991471   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:19.991492   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:19.991747   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:19.991921   33218 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:16:19.994722   33218 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:19.995114   33218 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:19.995141   33218 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:19.995266   33218 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:19.995673   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:19.995712   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:20.011552   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I0819 17:16:20.011928   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:20.012466   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:20.012490   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:20.012806   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:20.012968   33218 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:16:20.013152   33218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:20.013182   33218 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:16:20.015659   33218 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:20.016035   33218 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:20.016070   33218 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:20.016210   33218 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:16:20.016381   33218 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:16:20.016527   33218 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:16:20.016680   33218 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:16:20.096818   33218 ssh_runner.go:195] Run: systemctl --version
	I0819 17:16:20.102369   33218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:20.116797   33218 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:20.116828   33218 api_server.go:166] Checking apiserver status ...
	I0819 17:16:20.116867   33218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:20.129746   33218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:16:20.139294   33218 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:20.139358   33218 ssh_runner.go:195] Run: ls
	I0819 17:16:20.143502   33218 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:20.147427   33218 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:20.147444   33218 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:16:20.147452   33218 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:20.147467   33218 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:16:20.147743   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:20.147779   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:20.162500   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38651
	I0819 17:16:20.162960   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:20.163466   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:20.163484   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:20.163751   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:20.163920   33218 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:16:20.165357   33218 status.go:330] ha-227346-m02 host status = "Running" (err=<nil>)
	I0819 17:16:20.165374   33218 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:20.165667   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:20.165702   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:20.180584   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39925
	I0819 17:16:20.180957   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:20.181384   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:20.181401   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:20.181663   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:20.181832   33218 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:16:20.184243   33218 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:20.184634   33218 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:20.184660   33218 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:20.184793   33218 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:20.185092   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:20.185122   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:20.199758   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
	I0819 17:16:20.200112   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:20.200538   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:20.200559   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:20.200855   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:20.201007   33218 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:16:20.201147   33218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:20.201166   33218 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:16:20.203371   33218 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:20.203747   33218 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:20.203771   33218 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:20.203883   33218 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:16:20.204029   33218 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:16:20.204148   33218 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:16:20.204247   33218 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	W0819 17:16:20.621007   33218 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:20.621058   33218 retry.go:31] will retry after 325.21062ms: dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:24.013035   33218 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:24.013153   33218 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	E0819 17:16:24.013177   33218 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:24.013185   33218 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 17:16:24.013201   33218 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:24.013209   33218 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:16:24.013528   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:24.013575   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:24.028858   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0819 17:16:24.029298   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:24.029836   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:24.029859   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:24.030196   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:24.030381   33218 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:16:24.032089   33218 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:16:24.032105   33218 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:24.032388   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:24.032468   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:24.047983   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0819 17:16:24.048397   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:24.048888   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:24.048909   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:24.049217   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:24.049382   33218 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:16:24.052151   33218 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:24.052614   33218 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:24.052650   33218 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:24.052797   33218 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:24.053190   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:24.053226   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:24.068870   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
	I0819 17:16:24.069345   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:24.069803   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:24.069824   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:24.070166   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:24.070376   33218 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:16:24.070576   33218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:24.070596   33218 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:16:24.073486   33218 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:24.073894   33218 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:24.073918   33218 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:24.074072   33218 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:16:24.074250   33218 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:16:24.074397   33218 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:16:24.074538   33218 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:16:24.155967   33218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:24.171142   33218 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:24.171172   33218 api_server.go:166] Checking apiserver status ...
	I0819 17:16:24.171210   33218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:24.186030   33218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:16:24.195517   33218 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:24.195589   33218 ssh_runner.go:195] Run: ls
	I0819 17:16:24.199837   33218 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:24.205938   33218 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:24.205962   33218 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:16:24.205970   33218 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:24.205985   33218 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:16:24.206282   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:24.206323   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:24.221363   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I0819 17:16:24.221792   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:24.222293   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:24.222315   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:24.222682   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:24.222931   33218 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:16:24.224686   33218 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:16:24.224711   33218 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:24.225038   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:24.225079   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:24.240273   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37993
	I0819 17:16:24.240660   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:24.241145   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:24.241166   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:24.241498   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:24.241685   33218 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:16:24.244934   33218 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:24.245412   33218 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:24.245439   33218 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:24.245711   33218 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:24.246091   33218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:24.246132   33218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:24.261098   33218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0819 17:16:24.261545   33218 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:24.261971   33218 main.go:141] libmachine: Using API Version  1
	I0819 17:16:24.261990   33218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:24.262246   33218 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:24.262434   33218 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:16:24.262652   33218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:24.262679   33218 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:16:24.265481   33218 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:24.265858   33218 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:24.265879   33218 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:24.266050   33218 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:16:24.266218   33218 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:16:24.266345   33218 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:16:24.266480   33218 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:16:24.344009   33218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:24.358551   33218 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
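
For context on what the stderr above records: each status invocation probes a node by dialing its SSH port, running "sudo systemctl is-active --quiet service kubelet", and querying the shared apiserver endpoint https://192.168.39.254:8443/healthz; the repeated "connect: no route to host" failures against 192.168.39.189:22 are what turn ha-227346-m02 into an error result. The following is a minimal Go sketch of that probe sequence, not minikube's implementation: the IPs, the kubelet command, the SSH username and the /healthz URL are taken from the log, while the timeouts and the plain `ssh` invocation are assumptions for illustration.

    // Sketch of the per-node probe sequence recorded above (not minikube's code).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net"
    	"net/http"
    	"os/exec"
    	"time"
    )

    func probeNode(name, ip, healthzURL string) {
    	// Host reachability: the "dial tcp ...:22: connect: no route to host"
    	// lines in the log correspond to this step failing for ha-227346-m02.
    	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
    	if err != nil {
    		fmt.Printf("%s\thost: Error (%v)\n", name, err)
    		return
    	}
    	conn.Close()

    	// Kubelet check, mirroring the command the real flow runs over SSH;
    	// the bare `ssh` call and key handling here are illustrative only.
    	kubelet := "Stopped"
    	if exec.Command("ssh", "docker@"+ip,
    		"sudo systemctl is-active --quiet service kubelet").Run() == nil {
    		kubelet = "Running"
    	}

    	// Apiserver health via the shared VIP endpoint; the cluster certificate
    	// is self-signed, so verification is skipped in this sketch.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	apiserver := "Stopped"
    	if resp, err := client.Get(healthzURL); err == nil {
    		if resp.StatusCode == http.StatusOK {
    			apiserver = "Running"
    		}
    		resp.Body.Close()
    	}
    	fmt.Printf("%s\thost: Running, kubelet: %s, apiserver: %s\n", name, kubelet, apiserver)
    }

    func main() {
    	healthz := "https://192.168.39.254:8443/healthz"
    	probeNode("ha-227346", "192.168.39.205", healthz)
    	probeNode("ha-227346-m02", "192.168.39.189", healthz)
    }
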
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 3 (3.703851515s)

                                                
                                                
-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:16:28.051528   33337 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:16:28.051760   33337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:28.051768   33337 out.go:358] Setting ErrFile to fd 2...
	I0819 17:16:28.051772   33337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:28.051945   33337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:16:28.052087   33337 out.go:352] Setting JSON to false
	I0819 17:16:28.052109   33337 mustload.go:65] Loading cluster: ha-227346
	I0819 17:16:28.052150   33337 notify.go:220] Checking for updates...
	I0819 17:16:28.052545   33337 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:16:28.052560   33337 status.go:255] checking status of ha-227346 ...
	I0819 17:16:28.053046   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:28.053087   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:28.072950   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0819 17:16:28.073375   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:28.074003   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:28.074028   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:28.074384   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:28.074581   33337 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:16:28.076164   33337 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:16:28.076177   33337 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:28.076449   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:28.076483   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:28.091666   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0819 17:16:28.092024   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:28.092430   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:28.092450   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:28.092817   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:28.092992   33337 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:16:28.095857   33337 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:28.096274   33337 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:28.096304   33337 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:28.096438   33337 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:28.096781   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:28.096820   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:28.110908   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44119
	I0819 17:16:28.111342   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:28.111819   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:28.111837   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:28.112107   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:28.112263   33337 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:16:28.112478   33337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:28.112504   33337 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:16:28.115144   33337 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:28.115594   33337 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:28.115615   33337 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:28.115730   33337 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:16:28.115886   33337 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:16:28.116033   33337 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:16:28.116191   33337 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:16:28.196075   33337 ssh_runner.go:195] Run: systemctl --version
	I0819 17:16:28.201545   33337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:28.215108   33337 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:28.215139   33337 api_server.go:166] Checking apiserver status ...
	I0819 17:16:28.215183   33337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:28.227238   33337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:16:28.235520   33337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:28.235583   33337 ssh_runner.go:195] Run: ls
	I0819 17:16:28.239200   33337 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:28.243172   33337 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:28.243191   33337 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:16:28.243202   33337 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:28.243225   33337 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:16:28.243502   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:28.243543   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:28.258829   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0819 17:16:28.259179   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:28.259598   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:28.259617   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:28.259904   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:28.260080   33337 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:16:28.261661   33337 status.go:330] ha-227346-m02 host status = "Running" (err=<nil>)
	I0819 17:16:28.261676   33337 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:28.261940   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:28.261984   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:28.276377   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I0819 17:16:28.276716   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:28.277210   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:28.277234   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:28.277567   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:28.277751   33337 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:16:28.280256   33337 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:28.280672   33337 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:28.280698   33337 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:28.280832   33337 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:16:28.281120   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:28.281152   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:28.295617   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I0819 17:16:28.296027   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:28.296537   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:28.296557   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:28.296834   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:28.296962   33337 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:16:28.297134   33337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:28.297152   33337 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:16:28.299477   33337 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:28.299787   33337 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:16:28.299826   33337 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:16:28.299929   33337 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:16:28.300050   33337 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:16:28.300223   33337 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:16:28.300325   33337 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	W0819 17:16:31.376985   33337 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.189:22: connect: no route to host
	W0819 17:16:31.377104   33337 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	E0819 17:16:31.377127   33337 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:31.377140   33337 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0819 17:16:31.377173   33337 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.189:22: connect: no route to host
	I0819 17:16:31.377181   33337 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:16:31.377546   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:31.377590   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:31.392898   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0819 17:16:31.393282   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:31.393697   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:31.393719   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:31.394090   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:31.394293   33337 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:16:31.395866   33337 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:16:31.395883   33337 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:31.396183   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:31.396227   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:31.411232   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0819 17:16:31.411586   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:31.412064   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:31.412084   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:31.412363   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:31.412547   33337 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:16:31.415009   33337 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:31.415405   33337 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:31.415590   33337 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:31.415658   33337 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:31.415942   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:31.415982   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:31.431264   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0819 17:16:31.431655   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:31.432106   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:31.432126   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:31.432474   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:31.432667   33337 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:16:31.432877   33337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:31.432901   33337 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:16:31.435819   33337 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:31.436228   33337 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:31.436254   33337 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:31.436405   33337 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:16:31.436596   33337 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:16:31.436746   33337 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:16:31.436918   33337 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:16:31.515825   33337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:31.529015   33337 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:31.529041   33337 api_server.go:166] Checking apiserver status ...
	I0819 17:16:31.529071   33337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:31.542594   33337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:16:31.552610   33337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:31.552677   33337 ssh_runner.go:195] Run: ls
	I0819 17:16:31.557186   33337 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:31.562759   33337 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:31.562784   33337 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:16:31.562793   33337 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:31.562807   33337 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:16:31.563117   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:31.563152   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:31.578933   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0819 17:16:31.579269   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:31.579705   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:31.579726   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:31.580059   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:31.580267   33337 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:16:31.581675   33337 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:16:31.581691   33337 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:31.581983   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:31.582023   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:31.596419   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
	I0819 17:16:31.596859   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:31.597315   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:31.597338   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:31.597718   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:31.597921   33337 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:16:31.601090   33337 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:31.601652   33337 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:31.601676   33337 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:31.601887   33337 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:31.602519   33337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:31.602636   33337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:31.617294   33337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0819 17:16:31.617725   33337 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:31.618140   33337 main.go:141] libmachine: Using API Version  1
	I0819 17:16:31.618153   33337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:31.618470   33337 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:31.618612   33337 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:16:31.618785   33337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:31.618801   33337 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:16:31.621296   33337 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:31.621670   33337 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:31.621705   33337 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:31.621887   33337 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:16:31.622038   33337 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:16:31.622215   33337 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:16:31.622364   33337 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:16:31.700555   33337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:31.715235   33337 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
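
The second invocation fails the same way: the SSH dial to 192.168.39.189:22 is retried after a short delay (the first run shows "will retry after 325.21062ms") and then gives up, so ha-227346-m02 is summarized as Host:Error with Kubelet and APIServer Nonexistent while the kubeconfig entry stays "Configured", and the command exits with status 3. A small Go sketch of that mapping follows; the Status field names mirror the struct printed in the status.go:257 log lines and are not minikube's exported API, and the retry count and delays are illustrative assumptions.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"net"
    	"time"
    )

    // Status mirrors the fields printed in the "status.go:257" log lines above;
    // it is a sketch, not minikube's exported type.
    type Status struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    // dialWithRetry reproduces the pattern in the sshutil/retry lines: on
    // "connect: no route to host" the dial is retried after a short randomized
    // delay (e.g. the 325ms figure in the log) before giving up.
    func dialWithRetry(addr string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		var conn net.Conn
    		if conn, err = net.DialTimeout("tcp", addr, 5*time.Second); err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(time.Duration(200+rand.Intn(300)) * time.Millisecond)
    	}
    	return err
    }

    func main() {
    	st := Status{Name: "ha-227346-m02", Host: "Running", Kubelet: "Running",
    		APIServer: "Running", Kubeconfig: "Configured"}
    	if err := dialWithRetry("192.168.39.189:22", 3); err != nil {
    		// The mapping visible in the log: an unreachable host is reported as
    		// Error/Nonexistent while the kubeconfig entry remains "Configured".
    		st.Host, st.Kubelet, st.APIServer = "Error", "Nonexistent", "Nonexistent"
    	}
    	fmt.Printf("%+v\n", st)
    }
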
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 7 (807.103202ms)

                                                
                                                
-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
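
By this third run the ha-227346-m02 summary has flipped from "Error" to "Stopped" and the exit code from 3 to 7: the stderr below shows the kvm2 driver's GetState call returning "Stopped", so the SSH and apiserver probes are skipped ("host is not running, skipping remaining checks"). As an aside, the same distinction can be reproduced outside the test by asking libvirt for the domain state; the Go sketch below uses the standard `virsh domstate` command as an illustration and is not the driver's actual code path.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostState queries libvirt for the domain state, the same information the
    // kvm2 driver's GetState call reports in the stderr below. Using "virsh
    // domstate" here is an assumption for illustration, not the driver's code.
    func hostState(domain string) string {
    	out, err := exec.Command("virsh", "domstate", domain).Output()
    	if err != nil {
    		return "Error"
    	}
    	if strings.TrimSpace(string(out)) == "shut off" {
    		// Matches "host is not running, skipping remaining checks":
    		// probing SSH or the apiserver is pointless for a stopped VM.
    		return "Stopped"
    	}
    	return "Running"
    }

    func main() {
    	fmt.Println("ha-227346-m02:", hostState("ha-227346-m02"))
    }
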
** stderr ** 
	I0819 17:16:38.785367   33460 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:16:38.785501   33460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:38.785513   33460 out.go:358] Setting ErrFile to fd 2...
	I0819 17:16:38.785520   33460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:38.785730   33460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:16:38.785916   33460 out.go:352] Setting JSON to false
	I0819 17:16:38.785949   33460 mustload.go:65] Loading cluster: ha-227346
	I0819 17:16:38.786046   33460 notify.go:220] Checking for updates...
	I0819 17:16:38.786413   33460 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:16:38.786432   33460 status.go:255] checking status of ha-227346 ...
	I0819 17:16:38.786844   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:38.786920   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:38.801910   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0819 17:16:38.802374   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:38.802872   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:38.802892   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:38.803262   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:38.803490   33460 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:16:38.807528   33460 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:16:38.807550   33460 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:38.807920   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:38.807966   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:38.822362   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35725
	I0819 17:16:38.822840   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:38.823353   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:38.823380   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:38.823725   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:38.823950   33460 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:16:38.826838   33460 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:38.827282   33460 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:38.827302   33460 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:38.827428   33460 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:38.827705   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:38.827752   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:38.844186   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
	I0819 17:16:38.844601   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:38.845128   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:38.845153   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:38.845436   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:38.845613   33460 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:16:38.845806   33460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:38.845834   33460 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:16:38.848391   33460 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:38.848872   33460 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:38.848895   33460 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:38.849007   33460 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:16:38.849168   33460 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:16:38.849311   33460 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:16:38.849464   33460 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:16:38.937658   33460 ssh_runner.go:195] Run: systemctl --version
	I0819 17:16:38.946686   33460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:38.961266   33460 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:38.961295   33460 api_server.go:166] Checking apiserver status ...
	I0819 17:16:38.961331   33460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:38.975067   33460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:16:38.985529   33460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:38.985583   33460 ssh_runner.go:195] Run: ls
	I0819 17:16:38.989520   33460 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:38.994680   33460 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:38.994710   33460 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:16:38.994722   33460 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:38.994744   33460 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:16:38.995052   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:38.995103   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:39.009935   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I0819 17:16:39.010430   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:39.010941   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:39.010961   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:39.011271   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:39.011487   33460 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:16:39.207988   33460 status.go:330] ha-227346-m02 host status = "Stopped" (err=<nil>)
	I0819 17:16:39.208007   33460 status.go:343] host is not running, skipping remaining checks
	I0819 17:16:39.208013   33460 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:39.208029   33460 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:16:39.208316   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:39.208356   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:39.223170   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0819 17:16:39.223665   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:39.224137   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:39.224154   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:39.224465   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:39.224672   33460 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:16:39.226189   33460 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:16:39.226206   33460 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:39.226497   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:39.226546   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:39.241387   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0819 17:16:39.241808   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:39.242273   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:39.242290   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:39.242597   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:39.242757   33460 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:16:39.245705   33460 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:39.246133   33460 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:39.246156   33460 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:39.246290   33460 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:39.246631   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:39.246678   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:39.262321   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0819 17:16:39.262740   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:39.263221   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:39.263252   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:39.263618   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:39.263794   33460 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:16:39.263970   33460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:39.263986   33460 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:16:39.266998   33460 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:39.267406   33460 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:39.267433   33460 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:39.267638   33460 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:16:39.267805   33460 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:16:39.267946   33460 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:16:39.268068   33460 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:16:39.348341   33460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:39.363651   33460 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:39.363678   33460 api_server.go:166] Checking apiserver status ...
	I0819 17:16:39.363709   33460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:39.377875   33460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:16:39.387262   33460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:39.387322   33460 ssh_runner.go:195] Run: ls
	I0819 17:16:39.391715   33460 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:39.396013   33460 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:39.396033   33460 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:16:39.396041   33460 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:39.396053   33460 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:16:39.396326   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:39.396389   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:39.411982   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42451
	I0819 17:16:39.412425   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:39.412891   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:39.412912   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:39.413237   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:39.413425   33460 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:16:39.414863   33460 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:16:39.414875   33460 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:39.415172   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:39.415203   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:39.429557   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0819 17:16:39.429914   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:39.430381   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:39.430400   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:39.430707   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:39.430866   33460 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:16:39.433616   33460 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:39.434013   33460 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:39.434035   33460 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:39.434275   33460 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:39.434569   33460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:39.434602   33460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:39.449355   33460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0819 17:16:39.449783   33460 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:39.450143   33460 main.go:141] libmachine: Using API Version  1
	I0819 17:16:39.450168   33460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:39.450485   33460 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:39.450639   33460 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:16:39.450820   33460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:39.450841   33460 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:16:39.453225   33460 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:39.453605   33460 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:39.453639   33460 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:39.453772   33460 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:16:39.453917   33460 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:16:39.454065   33460 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:16:39.454213   33460 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:16:39.531622   33460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:39.546566   33460 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 7 (596.478334ms)

-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 17:16:48.668472   33580 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:16:48.668596   33580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:48.668607   33580 out.go:358] Setting ErrFile to fd 2...
	I0819 17:16:48.668614   33580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:16:48.668817   33580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:16:48.668968   33580 out.go:352] Setting JSON to false
	I0819 17:16:48.668992   33580 mustload.go:65] Loading cluster: ha-227346
	I0819 17:16:48.669109   33580 notify.go:220] Checking for updates...
	I0819 17:16:48.669481   33580 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:16:48.669503   33580 status.go:255] checking status of ha-227346 ...
	I0819 17:16:48.670095   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:48.670141   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:48.695445   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42197
	I0819 17:16:48.695848   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:48.696379   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:48.696400   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:48.696712   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:48.696906   33580 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:16:48.698511   33580 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:16:48.698524   33580 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:48.698797   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:48.698826   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:48.713310   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0819 17:16:48.713683   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:48.714179   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:48.714223   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:48.714529   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:48.714727   33580 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:16:48.717313   33580 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:48.717733   33580 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:48.717774   33580 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:48.717903   33580 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:16:48.718169   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:48.718208   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:48.732455   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0819 17:16:48.732775   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:48.733217   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:48.733239   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:48.733522   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:48.733709   33580 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:16:48.733885   33580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:48.733905   33580 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:16:48.736431   33580 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:48.736850   33580 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:16:48.736867   33580 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:16:48.737024   33580 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:16:48.737207   33580 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:16:48.737326   33580 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:16:48.737460   33580 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:16:48.816264   33580 ssh_runner.go:195] Run: systemctl --version
	I0819 17:16:48.822062   33580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:48.836210   33580 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:48.836242   33580 api_server.go:166] Checking apiserver status ...
	I0819 17:16:48.836273   33580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:48.850160   33580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:16:48.860108   33580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:48.860149   33580 ssh_runner.go:195] Run: ls
	I0819 17:16:48.864605   33580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:48.871097   33580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:48.871119   33580 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:16:48.871131   33580 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:48.871149   33580 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:16:48.871618   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:48.871668   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:48.886389   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0819 17:16:48.886822   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:48.887257   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:48.887272   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:48.887557   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:48.887847   33580 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:16:48.889501   33580 status.go:330] ha-227346-m02 host status = "Stopped" (err=<nil>)
	I0819 17:16:48.889517   33580 status.go:343] host is not running, skipping remaining checks
	I0819 17:16:48.889524   33580 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:48.889542   33580 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:16:48.889832   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:48.889872   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:48.905531   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0819 17:16:48.905869   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:48.906251   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:48.906270   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:48.906632   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:48.906804   33580 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:16:48.908356   33580 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:16:48.908370   33580 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:48.908654   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:48.908683   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:48.923261   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0819 17:16:48.923642   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:48.924073   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:48.924092   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:48.924385   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:48.924542   33580 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:16:48.927154   33580 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:48.927563   33580 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:48.927586   33580 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:48.927666   33580 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:16:48.927970   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:48.928007   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:48.942248   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0819 17:16:48.942679   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:48.943136   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:48.943155   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:48.943643   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:48.943852   33580 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:16:48.944059   33580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:48.944084   33580 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:16:48.947145   33580 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:48.947636   33580 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:16:48.947661   33580 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:16:48.947753   33580 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:16:48.947910   33580 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:16:48.948056   33580 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:16:48.948162   33580 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:16:49.028474   33580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:49.042599   33580 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:16:49.042624   33580 api_server.go:166] Checking apiserver status ...
	I0819 17:16:49.042661   33580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:16:49.058903   33580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:16:49.068690   33580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:16:49.068764   33580 ssh_runner.go:195] Run: ls
	I0819 17:16:49.073282   33580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:16:49.077486   33580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:16:49.077505   33580 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:16:49.077512   33580 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:16:49.077526   33580 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:16:49.077790   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:49.077823   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:49.092577   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0819 17:16:49.092996   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:49.093486   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:49.093512   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:49.093812   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:49.094019   33580 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:16:49.095563   33580 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:16:49.095583   33580 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:49.095857   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:49.095886   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:49.110209   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0819 17:16:49.110568   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:49.111023   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:49.111045   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:49.111372   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:49.111591   33580 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:16:49.114296   33580 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:49.114694   33580 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:49.114714   33580 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:49.114852   33580 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:16:49.115136   33580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:16:49.115182   33580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:16:49.129579   33580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I0819 17:16:49.130014   33580 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:16:49.130481   33580 main.go:141] libmachine: Using API Version  1
	I0819 17:16:49.130503   33580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:16:49.130779   33580 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:16:49.130989   33580 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:16:49.131171   33580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:16:49.131192   33580 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:16:49.133862   33580 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:49.134266   33580 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:16:49.134293   33580 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:16:49.134410   33580 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:16:49.134574   33580 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:16:49.134733   33580 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:16:49.134889   33580 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:16:49.212173   33580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:16:49.225900   33580 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 7 (605.221053ms)

-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-227346-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0819 17:17:01.728637   33691 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:17:01.728809   33691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:17:01.728822   33691 out.go:358] Setting ErrFile to fd 2...
	I0819 17:17:01.728829   33691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:17:01.728992   33691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:17:01.729144   33691 out.go:352] Setting JSON to false
	I0819 17:17:01.729166   33691 mustload.go:65] Loading cluster: ha-227346
	I0819 17:17:01.729212   33691 notify.go:220] Checking for updates...
	I0819 17:17:01.729516   33691 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:17:01.729529   33691 status.go:255] checking status of ha-227346 ...
	I0819 17:17:01.729848   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:01.729901   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:01.749064   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0819 17:17:01.749517   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:01.750034   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:01.750054   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:01.750382   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:01.750645   33691 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:17:01.752146   33691 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:17:01.752160   33691 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:17:01.752472   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:01.752510   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:01.766740   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I0819 17:17:01.767150   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:01.767650   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:01.767670   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:01.767962   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:01.768123   33691 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:17:01.770952   33691 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:17:01.771378   33691 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:17:01.771406   33691 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:17:01.771571   33691 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:17:01.772017   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:01.772064   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:01.787615   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I0819 17:17:01.788043   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:01.788509   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:01.788531   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:01.788894   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:01.789043   33691 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:17:01.789226   33691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:17:01.789258   33691 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:17:01.791687   33691 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:17:01.792107   33691 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:17:01.792146   33691 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:17:01.792241   33691 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:17:01.792462   33691 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:17:01.792607   33691 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:17:01.792796   33691 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:17:01.871777   33691 ssh_runner.go:195] Run: systemctl --version
	I0819 17:17:01.877256   33691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:17:01.892782   33691 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:17:01.892813   33691 api_server.go:166] Checking apiserver status ...
	I0819 17:17:01.892846   33691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:17:01.914042   33691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup
	W0819 17:17:01.928989   33691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1138/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:17:01.929062   33691 ssh_runner.go:195] Run: ls
	I0819 17:17:01.933764   33691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:17:01.938523   33691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:17:01.938544   33691 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:17:01.938553   33691 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:17:01.938568   33691 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:17:01.938847   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:01.938878   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:01.954660   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I0819 17:17:01.955112   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:01.955612   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:01.955633   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:01.955928   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:01.956120   33691 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:17:01.957838   33691 status.go:330] ha-227346-m02 host status = "Stopped" (err=<nil>)
	I0819 17:17:01.957850   33691 status.go:343] host is not running, skipping remaining checks
	I0819 17:17:01.957856   33691 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:17:01.957870   33691 status.go:255] checking status of ha-227346-m03 ...
	I0819 17:17:01.958211   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:01.958253   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:01.973513   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0819 17:17:01.973908   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:01.974384   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:01.974409   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:01.974717   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:01.974904   33691 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:17:01.976335   33691 status.go:330] ha-227346-m03 host status = "Running" (err=<nil>)
	I0819 17:17:01.976355   33691 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:17:01.976642   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:01.976679   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:01.991723   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0819 17:17:01.992155   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:01.992579   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:01.992607   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:01.992956   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:01.993085   33691 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:17:01.995833   33691 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:17:01.996229   33691 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:17:01.996251   33691 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:17:01.996330   33691 host.go:66] Checking if "ha-227346-m03" exists ...
	I0819 17:17:01.996671   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:01.996707   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:02.011445   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0819 17:17:02.011916   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:02.012407   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:02.012427   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:02.012712   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:02.012905   33691 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:17:02.013090   33691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:17:02.013112   33691 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:17:02.015839   33691 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:17:02.016252   33691 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:17:02.016279   33691 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:17:02.016406   33691 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:17:02.016592   33691 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:17:02.016735   33691 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:17:02.016910   33691 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:17:02.095625   33691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:17:02.116555   33691 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:17:02.116587   33691 api_server.go:166] Checking apiserver status ...
	I0819 17:17:02.116620   33691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:17:02.129067   33691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0819 17:17:02.138730   33691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:17:02.138776   33691 ssh_runner.go:195] Run: ls
	I0819 17:17:02.142570   33691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:17:02.146869   33691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:17:02.146896   33691 status.go:422] ha-227346-m03 apiserver status = Running (err=<nil>)
	I0819 17:17:02.146907   33691 status.go:257] ha-227346-m03 status: &{Name:ha-227346-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:17:02.146925   33691 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:17:02.147313   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:02.147361   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:02.161950   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0819 17:17:02.162393   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:02.162846   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:02.162867   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:02.163127   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:02.163360   33691 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:17:02.164858   33691 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:17:02.164875   33691 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:17:02.165190   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:02.165234   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:02.180412   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I0819 17:17:02.180832   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:02.181220   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:02.181243   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:02.181557   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:02.181721   33691 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:17:02.184581   33691 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:17:02.185014   33691 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:17:02.185046   33691 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:17:02.185216   33691 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:17:02.185501   33691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:02.185532   33691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:02.199569   33691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0819 17:17:02.199994   33691 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:02.200486   33691 main.go:141] libmachine: Using API Version  1
	I0819 17:17:02.200505   33691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:02.200808   33691 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:02.200969   33691 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:17:02.201152   33691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:17:02.201171   33691 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:17:02.203984   33691 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:17:02.204377   33691 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:17:02.204394   33691 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:17:02.204511   33691 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:17:02.204667   33691 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:17:02.204832   33691 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:17:02.204971   33691 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:17:02.279441   33691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:17:02.292566   33691 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-227346 -n ha-227346
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-227346 logs -n 25: (1.301685295s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346:/home/docker/cp-test_ha-227346-m03_ha-227346.txt                      |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346 sudo cat                                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346.txt                                |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m02:/home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m04 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp testdata/cp-test.txt                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346:/home/docker/cp-test_ha-227346-m04_ha-227346.txt                      |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346 sudo cat                                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346.txt                                |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m02:/home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03:/home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m03 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-227346 node stop m02 -v=7                                                    | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-227346 node start m02 -v=7                                                   | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:16 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:09:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:09:04.036568   28158 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:09:04.036858   28158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:09:04.036870   28158 out.go:358] Setting ErrFile to fd 2...
	I0819 17:09:04.036875   28158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:09:04.037049   28158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:09:04.037651   28158 out.go:352] Setting JSON to false
	I0819 17:09:04.038490   28158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3089,"bootTime":1724084255,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:09:04.038542   28158 start.go:139] virtualization: kvm guest
	I0819 17:09:04.040721   28158 out.go:177] * [ha-227346] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:09:04.042005   28158 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:09:04.042023   28158 notify.go:220] Checking for updates...
	I0819 17:09:04.044532   28158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:09:04.045856   28158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:09:04.046961   28158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:04.048020   28158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:09:04.049070   28158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:09:04.050387   28158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:09:04.083918   28158 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 17:09:04.085051   28158 start.go:297] selected driver: kvm2
	I0819 17:09:04.085070   28158 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:09:04.085083   28158 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:09:04.086023   28158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:09:04.086110   28158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:09:04.100306   28158 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:09:04.100353   28158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:09:04.100592   28158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:09:04.100668   28158 cni.go:84] Creating CNI manager for ""
	I0819 17:09:04.100683   28158 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 17:09:04.100690   28158 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:09:04.100777   28158 start.go:340] cluster config:
	{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:09:04.100905   28158 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:09:04.102500   28158 out.go:177] * Starting "ha-227346" primary control-plane node in "ha-227346" cluster
	I0819 17:09:04.103613   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:09:04.103644   28158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:09:04.103657   28158 cache.go:56] Caching tarball of preloaded images
	I0819 17:09:04.103727   28158 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:09:04.103738   28158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:09:04.104024   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:04.104055   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json: {Name:mk6e7d11c4e5aa09a7b1c55a1b184f3bbbc1bb77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:04.104199   28158 start.go:360] acquireMachinesLock for ha-227346: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:09:04.104247   28158 start.go:364] duration metric: took 24.55µs to acquireMachinesLock for "ha-227346"
	I0819 17:09:04.104270   28158 start.go:93] Provisioning new machine with config: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:09:04.104337   28158 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 17:09:04.106016   28158 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 17:09:04.106149   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:04.106190   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:04.119554   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0819 17:09:04.119969   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:04.120492   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:04.120511   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:04.120808   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:04.121001   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:04.121170   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:04.121311   28158 start.go:159] libmachine.API.Create for "ha-227346" (driver="kvm2")
	I0819 17:09:04.121338   28158 client.go:168] LocalClient.Create starting
	I0819 17:09:04.121368   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 17:09:04.121405   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:04.121434   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:04.121516   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 17:09:04.121542   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:04.121560   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:04.121596   28158 main.go:141] libmachine: Running pre-create checks...
	I0819 17:09:04.121614   28158 main.go:141] libmachine: (ha-227346) Calling .PreCreateCheck
	I0819 17:09:04.121929   28158 main.go:141] libmachine: (ha-227346) Calling .GetConfigRaw
	I0819 17:09:04.122249   28158 main.go:141] libmachine: Creating machine...
	I0819 17:09:04.122265   28158 main.go:141] libmachine: (ha-227346) Calling .Create
	I0819 17:09:04.122402   28158 main.go:141] libmachine: (ha-227346) Creating KVM machine...
	I0819 17:09:04.123482   28158 main.go:141] libmachine: (ha-227346) DBG | found existing default KVM network
	I0819 17:09:04.124096   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:04.123959   28181 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012b980}
	I0819 17:09:04.124120   28158 main.go:141] libmachine: (ha-227346) DBG | created network xml: 
	I0819 17:09:04.124132   28158 main.go:141] libmachine: (ha-227346) DBG | <network>
	I0819 17:09:04.124143   28158 main.go:141] libmachine: (ha-227346) DBG |   <name>mk-ha-227346</name>
	I0819 17:09:04.124151   28158 main.go:141] libmachine: (ha-227346) DBG |   <dns enable='no'/>
	I0819 17:09:04.124161   28158 main.go:141] libmachine: (ha-227346) DBG |   
	I0819 17:09:04.124171   28158 main.go:141] libmachine: (ha-227346) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 17:09:04.124180   28158 main.go:141] libmachine: (ha-227346) DBG |     <dhcp>
	I0819 17:09:04.124189   28158 main.go:141] libmachine: (ha-227346) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 17:09:04.124203   28158 main.go:141] libmachine: (ha-227346) DBG |     </dhcp>
	I0819 17:09:04.124215   28158 main.go:141] libmachine: (ha-227346) DBG |   </ip>
	I0819 17:09:04.124223   28158 main.go:141] libmachine: (ha-227346) DBG |   
	I0819 17:09:04.124231   28158 main.go:141] libmachine: (ha-227346) DBG | </network>
	I0819 17:09:04.124239   28158 main.go:141] libmachine: (ha-227346) DBG | 
	I0819 17:09:04.128999   28158 main.go:141] libmachine: (ha-227346) DBG | trying to create private KVM network mk-ha-227346 192.168.39.0/24...
	I0819 17:09:04.190799   28158 main.go:141] libmachine: (ha-227346) DBG | private KVM network mk-ha-227346 192.168.39.0/24 created
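	The lines above show the kvm2 driver composing a libvirt network definition for mk-ha-227346 (192.168.39.0/24 with a DHCP range of .2–.253) and creating it. As a rough illustration only, and assuming the virsh CLI is installed, an equivalent definition could be applied by hand as sketched below; minikube itself talks to libvirt through its API rather than shelling out, and "mk-ha-227346.xml" is a hypothetical file holding the XML printed above.

```go
// Illustrative sketch, not minikube's code: define, start and autostart a
// libvirt network from an XML file equivalent to the one in the log above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	for _, args := range [][]string{
		{"net-define", "mk-ha-227346.xml"}, // hypothetical dump of the XML above
		{"net-start", "mk-ha-227346"},
		{"net-autostart", "mk-ha-227346"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
	}
}
```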
	I0819 17:09:04.190824   28158 main.go:141] libmachine: (ha-227346) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346 ...
	I0819 17:09:04.190834   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:04.190773   28181 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:04.190852   28158 main.go:141] libmachine: (ha-227346) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:09:04.190939   28158 main.go:141] libmachine: (ha-227346) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:09:04.471387   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:04.471287   28181 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa...
	I0819 17:09:05.097746   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:05.097640   28181 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/ha-227346.rawdisk...
	I0819 17:09:05.097791   28158 main.go:141] libmachine: (ha-227346) DBG | Writing magic tar header
	I0819 17:09:05.097802   28158 main.go:141] libmachine: (ha-227346) DBG | Writing SSH key tar header
	I0819 17:09:05.097810   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:05.097746   28181 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346 ...
	I0819 17:09:05.097830   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346
	I0819 17:09:05.097910   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346 (perms=drwx------)
	I0819 17:09:05.097940   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:09:05.097959   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 17:09:05.097970   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 17:09:05.097981   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 17:09:05.097991   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:09:05.098004   28158 main.go:141] libmachine: (ha-227346) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:09:05.098017   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:05.098041   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 17:09:05.098059   28158 main.go:141] libmachine: (ha-227346) Creating domain...
	I0819 17:09:05.098071   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:09:05.098086   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:09:05.098096   28158 main.go:141] libmachine: (ha-227346) DBG | Checking permissions on dir: /home
	I0819 17:09:05.098106   28158 main.go:141] libmachine: (ha-227346) DBG | Skipping /home - not owner
	I0819 17:09:05.099219   28158 main.go:141] libmachine: (ha-227346) define libvirt domain using xml: 
	I0819 17:09:05.099244   28158 main.go:141] libmachine: (ha-227346) <domain type='kvm'>
	I0819 17:09:05.099253   28158 main.go:141] libmachine: (ha-227346)   <name>ha-227346</name>
	I0819 17:09:05.099257   28158 main.go:141] libmachine: (ha-227346)   <memory unit='MiB'>2200</memory>
	I0819 17:09:05.099262   28158 main.go:141] libmachine: (ha-227346)   <vcpu>2</vcpu>
	I0819 17:09:05.099267   28158 main.go:141] libmachine: (ha-227346)   <features>
	I0819 17:09:05.099282   28158 main.go:141] libmachine: (ha-227346)     <acpi/>
	I0819 17:09:05.099311   28158 main.go:141] libmachine: (ha-227346)     <apic/>
	I0819 17:09:05.099322   28158 main.go:141] libmachine: (ha-227346)     <pae/>
	I0819 17:09:05.099334   28158 main.go:141] libmachine: (ha-227346)     
	I0819 17:09:05.099344   28158 main.go:141] libmachine: (ha-227346)   </features>
	I0819 17:09:05.099355   28158 main.go:141] libmachine: (ha-227346)   <cpu mode='host-passthrough'>
	I0819 17:09:05.099365   28158 main.go:141] libmachine: (ha-227346)   
	I0819 17:09:05.099370   28158 main.go:141] libmachine: (ha-227346)   </cpu>
	I0819 17:09:05.099377   28158 main.go:141] libmachine: (ha-227346)   <os>
	I0819 17:09:05.099381   28158 main.go:141] libmachine: (ha-227346)     <type>hvm</type>
	I0819 17:09:05.099387   28158 main.go:141] libmachine: (ha-227346)     <boot dev='cdrom'/>
	I0819 17:09:05.099395   28158 main.go:141] libmachine: (ha-227346)     <boot dev='hd'/>
	I0819 17:09:05.099405   28158 main.go:141] libmachine: (ha-227346)     <bootmenu enable='no'/>
	I0819 17:09:05.099415   28158 main.go:141] libmachine: (ha-227346)   </os>
	I0819 17:09:05.099428   28158 main.go:141] libmachine: (ha-227346)   <devices>
	I0819 17:09:05.099436   28158 main.go:141] libmachine: (ha-227346)     <disk type='file' device='cdrom'>
	I0819 17:09:05.099451   28158 main.go:141] libmachine: (ha-227346)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/boot2docker.iso'/>
	I0819 17:09:05.099462   28158 main.go:141] libmachine: (ha-227346)       <target dev='hdc' bus='scsi'/>
	I0819 17:09:05.099472   28158 main.go:141] libmachine: (ha-227346)       <readonly/>
	I0819 17:09:05.099476   28158 main.go:141] libmachine: (ha-227346)     </disk>
	I0819 17:09:05.099482   28158 main.go:141] libmachine: (ha-227346)     <disk type='file' device='disk'>
	I0819 17:09:05.099495   28158 main.go:141] libmachine: (ha-227346)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:09:05.099511   28158 main.go:141] libmachine: (ha-227346)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/ha-227346.rawdisk'/>
	I0819 17:09:05.099522   28158 main.go:141] libmachine: (ha-227346)       <target dev='hda' bus='virtio'/>
	I0819 17:09:05.099530   28158 main.go:141] libmachine: (ha-227346)     </disk>
	I0819 17:09:05.099541   28158 main.go:141] libmachine: (ha-227346)     <interface type='network'>
	I0819 17:09:05.099550   28158 main.go:141] libmachine: (ha-227346)       <source network='mk-ha-227346'/>
	I0819 17:09:05.099560   28158 main.go:141] libmachine: (ha-227346)       <model type='virtio'/>
	I0819 17:09:05.099584   28158 main.go:141] libmachine: (ha-227346)     </interface>
	I0819 17:09:05.099611   28158 main.go:141] libmachine: (ha-227346)     <interface type='network'>
	I0819 17:09:05.099624   28158 main.go:141] libmachine: (ha-227346)       <source network='default'/>
	I0819 17:09:05.099637   28158 main.go:141] libmachine: (ha-227346)       <model type='virtio'/>
	I0819 17:09:05.099648   28158 main.go:141] libmachine: (ha-227346)     </interface>
	I0819 17:09:05.099657   28158 main.go:141] libmachine: (ha-227346)     <serial type='pty'>
	I0819 17:09:05.099663   28158 main.go:141] libmachine: (ha-227346)       <target port='0'/>
	I0819 17:09:05.099671   28158 main.go:141] libmachine: (ha-227346)     </serial>
	I0819 17:09:05.099682   28158 main.go:141] libmachine: (ha-227346)     <console type='pty'>
	I0819 17:09:05.099699   28158 main.go:141] libmachine: (ha-227346)       <target type='serial' port='0'/>
	I0819 17:09:05.099714   28158 main.go:141] libmachine: (ha-227346)     </console>
	I0819 17:09:05.099726   28158 main.go:141] libmachine: (ha-227346)     <rng model='virtio'>
	I0819 17:09:05.099752   28158 main.go:141] libmachine: (ha-227346)       <backend model='random'>/dev/random</backend>
	I0819 17:09:05.099764   28158 main.go:141] libmachine: (ha-227346)     </rng>
	I0819 17:09:05.099786   28158 main.go:141] libmachine: (ha-227346)     
	I0819 17:09:05.099807   28158 main.go:141] libmachine: (ha-227346)     
	I0819 17:09:05.099821   28158 main.go:141] libmachine: (ha-227346)   </devices>
	I0819 17:09:05.099832   28158 main.go:141] libmachine: (ha-227346) </domain>
	I0819 17:09:05.099846   28158 main.go:141] libmachine: (ha-227346) 
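	The domain XML above (the boot2docker ISO as a CD-ROM, the raw disk, one NIC on mk-ha-227346 and one on the default network, a serial console and a virtio RNG) is then defined and booted through libvirt. Below is a minimal sketch of that define-then-create step, assuming the libvirt Go bindings (libvirt.org/go/libvirt) and the XML saved to a hypothetical file; it is not the driver's actual code.

```go
// Sketch only: register a persistent domain from an XML dump like the one
// in the log and start it. Error handling is deliberately minimal.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-227346-domain.xml") // hypothetical dump of the XML above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		log.Fatal(err)
	}
}
```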
	I0819 17:09:05.104727   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:75:31:56 in network default
	I0819 17:09:05.105291   28158 main.go:141] libmachine: (ha-227346) Ensuring networks are active...
	I0819 17:09:05.105306   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:05.106054   28158 main.go:141] libmachine: (ha-227346) Ensuring network default is active
	I0819 17:09:05.106404   28158 main.go:141] libmachine: (ha-227346) Ensuring network mk-ha-227346 is active
	I0819 17:09:05.106945   28158 main.go:141] libmachine: (ha-227346) Getting domain xml...
	I0819 17:09:05.107806   28158 main.go:141] libmachine: (ha-227346) Creating domain...
	I0819 17:09:06.292856   28158 main.go:141] libmachine: (ha-227346) Waiting to get IP...
	I0819 17:09:06.293520   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:06.293882   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:06.293921   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:06.293861   28181 retry.go:31] will retry after 227.629159ms: waiting for machine to come up
	I0819 17:09:06.523593   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:06.524114   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:06.524150   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:06.524075   28181 retry.go:31] will retry after 292.133348ms: waiting for machine to come up
	I0819 17:09:06.817457   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:06.817907   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:06.817934   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:06.817873   28181 retry.go:31] will retry after 467.412101ms: waiting for machine to come up
	I0819 17:09:07.286543   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:07.287005   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:07.287030   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:07.286964   28181 retry.go:31] will retry after 421.9896ms: waiting for machine to come up
	I0819 17:09:07.710440   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:07.710830   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:07.710878   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:07.710805   28181 retry.go:31] will retry after 531.369228ms: waiting for machine to come up
	I0819 17:09:08.243409   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:08.243763   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:08.243792   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:08.243725   28181 retry.go:31] will retry after 699.187629ms: waiting for machine to come up
	I0819 17:09:08.944004   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:08.944382   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:08.944414   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:08.944337   28181 retry.go:31] will retry after 867.603094ms: waiting for machine to come up
	I0819 17:09:09.813897   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:09.814274   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:09.814302   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:09.814254   28181 retry.go:31] will retry after 1.027123124s: waiting for machine to come up
	I0819 17:09:10.843615   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:10.844095   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:10.844112   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:10.844055   28181 retry.go:31] will retry after 1.833742027s: waiting for machine to come up
	I0819 17:09:12.678985   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:12.679365   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:12.679393   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:12.679325   28181 retry.go:31] will retry after 1.648162625s: waiting for machine to come up
	I0819 17:09:14.329269   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:14.329767   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:14.329793   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:14.329733   28181 retry.go:31] will retry after 2.105332646s: waiting for machine to come up
	I0819 17:09:16.437905   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:16.438313   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:16.438338   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:16.438267   28181 retry.go:31] will retry after 3.409284945s: waiting for machine to come up
	I0819 17:09:19.849512   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:19.849804   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find current IP address of domain ha-227346 in network mk-ha-227346
	I0819 17:09:19.849826   28158 main.go:141] libmachine: (ha-227346) DBG | I0819 17:09:19.849765   28181 retry.go:31] will retry after 3.80335016s: waiting for machine to come up
	I0819 17:09:23.657777   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.658164   28158 main.go:141] libmachine: (ha-227346) Found IP for machine: 192.168.39.205
	I0819 17:09:23.658186   28158 main.go:141] libmachine: (ha-227346) Reserving static IP address...
	I0819 17:09:23.658199   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has current primary IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.658540   28158 main.go:141] libmachine: (ha-227346) DBG | unable to find host DHCP lease matching {name: "ha-227346", mac: "52:54:00:ba:14:7f", ip: "192.168.39.205"} in network mk-ha-227346
	I0819 17:09:23.729579   28158 main.go:141] libmachine: (ha-227346) DBG | Getting to WaitForSSH function...
	I0819 17:09:23.729609   28158 main.go:141] libmachine: (ha-227346) Reserved static IP address: 192.168.39.205
	I0819 17:09:23.729651   28158 main.go:141] libmachine: (ha-227346) Waiting for SSH to be available...
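	Both waits above — for a DHCP lease to appear and for SSH to answer — follow the same retry-with-growing-delay pattern visible in the retry.go lines (227ms, 292ms, … up to a few seconds). The following standard-library-only sketch shows that pattern; lookupIP is a hypothetical stand-in for the lease query the driver performs, not a real minikube function.

```go
// Sketch of the backoff loop seen above: poll until the guest's IP shows up
// or the deadline passes, roughly doubling the wait between attempts.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder for reading the libvirt DHCP leases of
// mk-ha-227346 and matching the domain's MAC address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	timeout := time.After(deadline)
	for {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		select {
		case <-timeout:
			return "", fmt.Errorf("no IP for %s within %s", mac, deadline)
		case <-time.After(delay):
		}
		if delay < 4*time.Second {
			delay *= 2 // grow the wait, as the log's retry intervals do
		}
	}
}

func main() {
	// With the placeholder above this blocks for the full deadline, then errors.
	ip, err := waitForIP("52:54:00:ba:14:7f", 30*time.Second)
	fmt.Println(ip, err)
}
```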
	I0819 17:09:23.731831   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.732172   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:23.732200   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.732324   28158 main.go:141] libmachine: (ha-227346) DBG | Using SSH client type: external
	I0819 17:09:23.732353   28158 main.go:141] libmachine: (ha-227346) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa (-rw-------)
	I0819 17:09:23.732379   28158 main.go:141] libmachine: (ha-227346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:09:23.732406   28158 main.go:141] libmachine: (ha-227346) DBG | About to run SSH command:
	I0819 17:09:23.732420   28158 main.go:141] libmachine: (ha-227346) DBG | exit 0
	I0819 17:09:23.852556   28158 main.go:141] libmachine: (ha-227346) DBG | SSH cmd err, output: <nil>: 
	I0819 17:09:23.852900   28158 main.go:141] libmachine: (ha-227346) KVM machine creation complete!
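	The SSH probe above simply runs `exit 0` through the system ssh client with host-key checking and password authentication disabled. Here is a sketch of an equivalent one-shot check; the key path and address are the ones from this particular run and would differ elsewhere.

```go
// Sketch: verify that the freshly created VM accepts key-based SSH by
// running a no-op command, mirroring the option set shown in the log.
package main

import (
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa",
		"docker@192.168.39.205",
		"exit 0",
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		log.Fatalf("ssh not ready yet: %v\n%s", err, out)
	}
	log.Println("ssh is available")
}
```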
	I0819 17:09:23.853275   28158 main.go:141] libmachine: (ha-227346) Calling .GetConfigRaw
	I0819 17:09:23.853865   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:23.854050   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:23.854227   28158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:09:23.854240   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:23.855460   28158 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:09:23.855476   28158 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:09:23.855484   28158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:09:23.855492   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:23.857441   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.857748   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:23.857778   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.857907   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:23.858060   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.858268   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.858413   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:23.858615   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:23.858823   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:23.858837   28158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:09:23.959980   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:09:23.960000   28158 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:09:23.960008   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:23.962895   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.963242   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:23.963279   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:23.963526   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:23.963762   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.963985   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:23.964133   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:23.964334   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:23.964506   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:23.964517   28158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:09:24.064978   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:09:24.065054   28158 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:09:24.065064   28158 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:09:24.065071   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:24.065314   28158 buildroot.go:166] provisioning hostname "ha-227346"
	I0819 17:09:24.065344   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:24.065521   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.068050   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.068401   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.068434   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.068541   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.068712   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.068858   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.068991   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.069147   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:24.069424   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:24.069445   28158 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346 && echo "ha-227346" | sudo tee /etc/hostname
	I0819 17:09:24.186415   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346
	
	I0819 17:09:24.186461   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.189142   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.189471   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.189500   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.189705   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.189888   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.190038   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.190264   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.190470   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:24.190676   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:24.190692   28158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:09:24.300639   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
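	Hostname provisioning above is two remote commands: write /etc/hostname, then patch /etc/hosts only if the name is not already present (the grep -xq guard keeps the edit idempotent). The small sketch below renders the same snippet for an arbitrary hostname; hostnameScript is an illustrative helper, not part of minikube.

```go
// Sketch: build the idempotent hostname/hosts snippet the log shows,
// parameterised on the hostname.
package main

import "fmt"

func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("ha-227346"))
}
```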
	I0819 17:09:24.300668   28158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:09:24.300715   28158 buildroot.go:174] setting up certificates
	I0819 17:09:24.300727   28158 provision.go:84] configureAuth start
	I0819 17:09:24.300739   28158 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:09:24.301042   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:24.303526   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.303973   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.304001   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.304100   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.306151   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.306474   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.306512   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.306634   28158 provision.go:143] copyHostCerts
	I0819 17:09:24.306667   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:09:24.306711   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:09:24.306721   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:09:24.306817   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:09:24.306937   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:09:24.306966   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:09:24.306977   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:09:24.307110   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:09:24.307251   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:09:24.307281   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:09:24.307290   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:09:24.307343   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:09:24.307426   28158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346 san=[127.0.0.1 192.168.39.205 ha-227346 localhost minikube]
	I0819 17:09:24.552566   28158 provision.go:177] copyRemoteCerts
	I0819 17:09:24.552628   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:09:24.552653   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.555270   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.555563   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.555587   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.555810   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.556008   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.556156   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.556273   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:24.638518   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:09:24.638585   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:09:24.660635   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:09:24.660695   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 17:09:24.681651   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:09:24.681720   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:09:24.702480   28158 provision.go:87] duration metric: took 401.737805ms to configureAuth
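	copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker over the driver's existing SSH session. Because /etc/docker is root-owned, a hand-rolled equivalent has to stage each file and move it with sudo; the sketch below approximates that for one file, assuming the same key, user and address as this run.

```go
// Sketch: copy a cert to the VM with scp, then install it into /etc/docker
// with sudo over ssh. This only approximates what the driver does in-process.
package main

import (
	"log"
	"os/exec"
)

const (
	key  = "/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa"
	host = "docker@192.168.39.205"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Stage the file in /tmp on the VM, then install it into /etc/docker as root.
	run("scp", "-o", "StrictHostKeyChecking=no", "-i", key, "ca.pem", host+":/tmp/ca.pem")
	run("ssh", "-o", "StrictHostKeyChecking=no", "-i", key, host,
		"sudo install -m 0644 /tmp/ca.pem /etc/docker/ca.pem")
}
```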
	I0819 17:09:24.702522   28158 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:09:24.702692   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:09:24.702774   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.705652   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.705986   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.706010   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.706188   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.706389   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.706517   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.706624   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.706739   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:24.706894   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:24.706909   28158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:09:24.957848   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
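	The container-runtime step above drops a one-line /etc/sysconfig/crio.minikube containing CRIO_MINIKUBE_OPTIONS and restarts CRI-O, all as a single remote shell command. The sketch below assembles the same command for a given service CIDR; crioOptionsCmd is illustrative, not an actual minikube helper.

```go
// Sketch: compose the sysconfig-write-and-restart command seen in the log.
package main

import "fmt"

func crioOptionsCmd(serviceCIDR string) string {
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, serviceCIDR)
}

func main() {
	fmt.Println(crioOptionsCmd("10.96.0.0/12"))
}
```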
	
	I0819 17:09:24.957879   28158 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:09:24.957891   28158 main.go:141] libmachine: (ha-227346) Calling .GetURL
	I0819 17:09:24.959356   28158 main.go:141] libmachine: (ha-227346) DBG | Using libvirt version 6000000
	I0819 17:09:24.962223   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.962592   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.962632   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.962771   28158 main.go:141] libmachine: Docker is up and running!
	I0819 17:09:24.962787   28158 main.go:141] libmachine: Reticulating splines...
	I0819 17:09:24.962795   28158 client.go:171] duration metric: took 20.841449041s to LocalClient.Create
	I0819 17:09:24.962820   28158 start.go:167] duration metric: took 20.84150978s to libmachine.API.Create "ha-227346"
	I0819 17:09:24.962830   28158 start.go:293] postStartSetup for "ha-227346" (driver="kvm2")
	I0819 17:09:24.962840   28158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:09:24.962856   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:24.963099   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:09:24.963127   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:24.965414   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.965734   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:24.965759   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:24.965899   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:24.966066   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:24.966221   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:24.966357   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:25.046570   28158 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:09:25.050669   28158 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:09:25.050691   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:09:25.050750   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:09:25.050817   28158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:09:25.050826   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:09:25.050910   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:09:25.060113   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:09:25.085218   28158 start.go:296] duration metric: took 122.376609ms for postStartSetup
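The postStartSetup step above (122ms) syncs local assets into the VM: every file under the profile's .minikube/files tree is mapped to the identical absolute path inside the guest, which is how 178372.pem ends up in /etc/ssl/certs. A minimal sketch of that mapping only, assuming a hypothetical scanLocalAssets helper (the real filesync.go also scans the addons directory and performs the scp):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    // scanLocalAssets maps every file under filesDir to the same absolute path
    // inside the VM, e.g. files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem.
    // Illustrative only; the copy itself happens later over SSH.
    func scanLocalAssets(filesDir string) (map[string]string, error) {
    	assets := map[string]string{}
    	err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, walkErr error) error {
    		if walkErr != nil || d.IsDir() {
    			return walkErr
    		}
    		rel, err := filepath.Rel(filesDir, path)
    		if err != nil {
    			return err
    		}
    		assets[path] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
    		return nil
    	})
    	return assets, err
    }

    func main() {
    	m, err := scanLocalAssets("/home/jenkins/minikube-integration/19478-10654/.minikube/files")
    	fmt.Println(m, err)
    }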
	I0819 17:09:25.085264   28158 main.go:141] libmachine: (ha-227346) Calling .GetConfigRaw
	I0819 17:09:25.085816   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:25.088323   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.088814   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.088839   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.089092   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:25.089262   28158 start.go:128] duration metric: took 20.984914626s to createHost
	I0819 17:09:25.089283   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:25.091507   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.091809   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.091835   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.091982   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:25.092163   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.092315   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.092444   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:25.092595   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:09:25.092816   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:09:25.092829   28158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:09:25.197214   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087365.170604273
	
	I0819 17:09:25.197239   28158 fix.go:216] guest clock: 1724087365.170604273
	I0819 17:09:25.197251   28158 fix.go:229] Guest: 2024-08-19 17:09:25.170604273 +0000 UTC Remote: 2024-08-19 17:09:25.089273109 +0000 UTC m=+21.086006962 (delta=81.331164ms)
	I0819 17:09:25.197275   28158 fix.go:200] guest clock delta is within tolerance: 81.331164ms
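The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and only resync time when the delta exceeds a tolerance. A minimal sketch of that comparison; the 2s tolerance is an illustrative assumption, since the log only shows that 81.331164ms passed the check:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaWithinTolerance returns the absolute guest-vs-host clock delta
    // and whether it falls inside the allowed tolerance.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Timestamps taken from the fix.go line above.
    	guest := time.Date(2024, 8, 19, 17, 9, 25, 170604273, time.UTC)
    	host := time.Date(2024, 8, 19, 17, 9, 25, 89273109, time.UTC)
    	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta=81.331164ms within tolerance: true
    }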
	I0819 17:09:25.197281   28158 start.go:83] releasing machines lock for "ha-227346", held for 21.09302376s
	I0819 17:09:25.197302   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.197582   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:25.199941   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.200256   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.200280   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.200448   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.200927   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.201087   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:25.201180   28158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:09:25.201220   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:25.201266   28158 ssh_runner.go:195] Run: cat /version.json
	I0819 17:09:25.201287   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:25.203827   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.203865   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.204170   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.204196   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.204266   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:25.204293   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:25.204341   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:25.204512   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:25.204518   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.204677   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:25.204695   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:25.204777   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:25.204855   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:25.204894   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:25.311166   28158 ssh_runner.go:195] Run: systemctl --version
	I0819 17:09:25.316668   28158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:09:25.475482   28158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:09:25.480908   28158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:09:25.480978   28158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:09:25.495713   28158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 17:09:25.495735   28158 start.go:495] detecting cgroup driver to use...
	I0819 17:09:25.495796   28158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:09:25.510747   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:09:25.526102   28158 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:09:25.526171   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:09:25.539078   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:09:25.552018   28158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:09:25.657812   28158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:09:25.817472   28158 docker.go:233] disabling docker service ...
	I0819 17:09:25.817548   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:09:25.831346   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:09:25.843914   28158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:09:25.976957   28158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:09:26.104356   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
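The block above stops, disables, and masks the cri-docker and docker units so that cri-o is the only runtime answering on a CRI socket. A sketch of that stop/disable/mask sequence, assuming a hypothetical disableService helper; minikube issues the equivalent commands over SSH and tolerates failures for units that are not installed:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // disableService mirrors the systemctl sequence logged above for
    // cri-docker and docker. Errors are reported but not fatal, matching the
    // best-effort behaviour in the log.
    func disableService(unit string) {
    	for _, args := range [][]string{
    		{"systemctl", "stop", "-f", unit + ".socket"},
    		{"systemctl", "stop", "-f", unit + ".service"},
    		{"systemctl", "disable", unit + ".socket"},
    		{"systemctl", "mask", unit + ".service"},
    	} {
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			fmt.Printf("%v: %v (ignored)\n", args, err)
    		}
    	}
    }

    func main() {
    	disableService("cri-docker")
    	disableService("docker")
    }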
	I0819 17:09:26.117589   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:09:26.135726   28158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:09:26.135792   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.145784   28158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:09:26.145853   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.155900   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.165633   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.175589   28158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:09:26.185415   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.194740   28158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:09:26.210069   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
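The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10 and the cgroup manager is switched to cgroupfs to match the kubelet configuration generated later. A local sketch of the first two edits, assuming a hypothetical configureCRIO helper (minikube's crio.go runs the same commands through ssh_runner inside the VM):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configureCRIO applies the pause-image and cgroup-manager sed edits shown
    // in the log to the cri-o drop-in config.
    func configureCRIO(pauseImage, cgroupManager string) error {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	cmds := [][]string{
    		{"sudo", "sed", "-i", fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage), conf},
    		{"sudo", "sed", "-i", fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager), conf},
    	}
    	for _, c := range cmds {
    		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", c, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := configureCRIO("registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
    		fmt.Println("configure cri-o:", err)
    	}
    }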
	I0819 17:09:26.219701   28158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:09:26.228426   28158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:09:26.228485   28158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:09:26.240858   28158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:09:26.249419   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:09:26.365160   28158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:09:26.488729   28158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:09:26.488808   28158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:09:26.493106   28158 start.go:563] Will wait 60s for crictl version
	I0819 17:09:26.493164   28158 ssh_runner.go:195] Run: which crictl
	I0819 17:09:26.496562   28158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:09:26.535776   28158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:09:26.535866   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:09:26.561181   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:09:26.588190   28158 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:09:26.589563   28158 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:09:26.592126   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:26.592484   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:26.592512   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:26.592732   28158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:09:26.596466   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:09:26.608306   28158 kubeadm.go:883] updating cluster {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:09:26.608412   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:09:26.608482   28158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:09:26.638050   28158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 17:09:26.638114   28158 ssh_runner.go:195] Run: which lz4
	I0819 17:09:26.641598   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0819 17:09:26.641681   28158 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 17:09:26.645389   28158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 17:09:26.645417   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 17:09:27.760700   28158 crio.go:462] duration metric: took 1.119047314s to copy over tarball
	I0819 17:09:27.760788   28158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 17:09:29.722576   28158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.96176146s)
	I0819 17:09:29.722601   28158 crio.go:469] duration metric: took 1.961880124s to extract the tarball
	I0819 17:09:29.722609   28158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 17:09:29.758382   28158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:09:29.802261   28158 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:09:29.802284   28158 cache_images.go:84] Images are preloaded, skipping loading
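The preload decision above hinges on `sudo crictl images --output json`: before the tarball is copied the expected kube-apiserver tag is missing (crio.go:510), and after extraction the same check reports all images present (crio.go:514). A sketch of that check, assuming a hypothetical hasImage helper:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // crictlImages models only the fields needed from `crictl images -o json`.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the CRI runtime already knows the given tag,
    // which is roughly the test that decides whether the preload tarball must
    // be copied and extracted.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
    	fmt.Println(ok, err)
    }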
	I0819 17:09:29.802293   28158 kubeadm.go:934] updating node { 192.168.39.205 8443 v1.31.0 crio true true} ...
	I0819 17:09:29.802409   28158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:09:29.802484   28158 ssh_runner.go:195] Run: crio config
	I0819 17:09:29.844677   28158 cni.go:84] Creating CNI manager for ""
	I0819 17:09:29.844694   28158 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 17:09:29.844709   28158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:09:29.844731   28158 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-227346 NodeName:ha-227346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:09:29.844894   28158 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-227346"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
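minikube renders the kubeadm config above from a Go template, substituting the node name, IP, and API server port computed earlier in the run. A trimmed-down sketch of that substitution for the InitConfiguration block only; the template fragment and struct fields below are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A reduced InitConfiguration template in the spirit of the block above;
    // the real config also carries ClusterConfiguration, KubeletConfiguration
    // and KubeProxyConfiguration sections.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initCfg))
    	_ = t.Execute(os.Stdout, struct {
    		NodeName, NodeIP string
    		APIServerPort    int
    	}{NodeName: "ha-227346", NodeIP: "192.168.39.205", APIServerPort: 8443})
    }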
	
	I0819 17:09:29.844917   28158 kube-vip.go:115] generating kube-vip config ...
	I0819 17:09:29.844965   28158 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:09:29.861764   28158 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:09:29.861866   28158 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
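The kube-vip manifest above carries lb_enable/lb_port entries because the modprobe probe at 17:09:29.844965 succeeded, which is what kube-vip.go:167 means by "auto-enabling control-plane load-balancing". A sketch of that probe, assuming a hypothetical ipvsAvailable helper; minikube runs the command over SSH inside the VM:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ipvsAvailable returns true when the ip_vs family of kernel modules loads,
    // which is the condition for enabling kube-vip's control-plane LB.
    func ipvsAvailable() bool {
    	err := exec.Command("sudo", "sh", "-c",
    		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
    	return err == nil
    }

    func main() {
    	fmt.Println("kube-vip LB auto-enable:", ipvsAvailable())
    }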
	I0819 17:09:29.861916   28158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:09:29.870992   28158 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:09:29.871059   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 17:09:29.879608   28158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 17:09:29.894571   28158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:09:29.909206   28158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 17:09:29.924181   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0819 17:09:29.938999   28158 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:09:29.942670   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:09:29.953367   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:09:30.069400   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:09:30.085973   28158 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.205
	I0819 17:09:30.085999   28158 certs.go:194] generating shared ca certs ...
	I0819 17:09:30.086015   28158 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.086198   28158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:09:30.086254   28158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:09:30.086268   28158 certs.go:256] generating profile certs ...
	I0819 17:09:30.086342   28158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:09:30.086359   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt with IP's: []
	I0819 17:09:30.173064   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt ...
	I0819 17:09:30.173092   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt: {Name:mk591f421539a106f08e5c1d174e11dc33c0a5bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.173272   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key ...
	I0819 17:09:30.173285   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key: {Name:mkd462373711801288a4ce7966c2b6d712194477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.173388   28158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b
	I0819 17:09:30.173404   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.254]
	I0819 17:09:30.233812   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b ...
	I0819 17:09:30.233839   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b: {Name:mkb651d7d4607b62d21d16ba15b130759f43fa27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.233994   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b ...
	I0819 17:09:30.234006   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b: {Name:mk93f62ffdd65b89624f041e2ccf7fba11f0a010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.234095   28158 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.f854639b -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:09:30.234174   28158 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.f854639b -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
	I0819 17:09:30.234227   28158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:09:30.234242   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt with IP's: []
	I0819 17:09:30.300093   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt ...
	I0819 17:09:30.300121   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt: {Name:mk537dacc775b012dc5337f6a018fbc6b28b2cc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:30.300264   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key ...
	I0819 17:09:30.300281   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key: {Name:mk21b5ccc1585a537d1750c1265bac520761ee51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
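The certs.go/crypto.go lines above create the per-profile certificates: a client cert, an apiserver serving cert whose SANs include the node IP 192.168.39.205 and the HA VIP 192.168.39.254, and an aggregator proxy-client cert, all signed by the shared minikubeCA. A standard-library sketch of signing such a serving cert; this is not minikube's crypto.go, and error handling and field choices are simplified:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert with the SAN IPs listed in the log above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("192.168.39.205"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
    }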
	I0819 17:09:30.300347   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:09:30.300364   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:09:30.300379   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:09:30.300392   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:09:30.300402   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:09:30.300414   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:09:30.300423   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:09:30.300435   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:09:30.300480   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:09:30.300511   28158 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:09:30.300520   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:09:30.300540   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:09:30.300565   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:09:30.300588   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:09:30.300636   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:09:30.300661   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.300673   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.300686   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.301212   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:09:30.325076   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:09:30.347257   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:09:30.368721   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:09:30.389960   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 17:09:30.411449   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:09:30.433372   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:09:30.455374   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:09:30.476292   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:09:30.497100   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:09:30.517682   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:09:30.538854   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:09:30.554208   28158 ssh_runner.go:195] Run: openssl version
	I0819 17:09:30.559810   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:09:30.569973   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.573945   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.574001   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:09:30.579263   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:09:30.589001   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:09:30.598822   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.602904   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.602967   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:09:30.608088   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:09:30.617950   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:09:30.628103   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.632174   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.632224   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:09:30.637537   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
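The openssl/ln commands above install each CA under /etc/ssl/certs by its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is how TLS clients inside the VM locate trusted CAs. A sketch of that hash-and-link step, assuming a hypothetical linkCACert helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of a CA certificate and
    // installs the <hash>.0 symlink under /etc/ssl/certs, mirroring the
    // commands in the log above.
    func linkCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
    	link := "/etc/ssl/certs/" + hash + ".0"
    	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
    }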
	I0819 17:09:30.647799   28158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:09:30.651723   28158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:09:30.651778   28158 kubeadm.go:392] StartCluster: {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:09:30.651861   28158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:09:30.651913   28158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:09:30.687533   28158 cri.go:89] found id: ""
	I0819 17:09:30.687611   28158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:09:30.697284   28158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:09:30.706179   28158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:09:30.714803   28158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:09:30.714820   28158 kubeadm.go:157] found existing configuration files:
	
	I0819 17:09:30.714863   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:09:30.723039   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:09:30.723084   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:09:30.731744   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:09:30.740029   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:09:30.740091   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:09:30.748716   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:09:30.756943   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:09:30.756995   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:09:30.765443   28158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:09:30.773460   28158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:09:30.773504   28158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:09:30.781847   28158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 17:09:30.872907   28158 kubeadm.go:310] W0819 17:09:30.853430     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:09:30.873644   28158 kubeadm.go:310] W0819 17:09:30.854326     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:09:31.001162   28158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:09:41.405511   28158 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:09:41.405574   28158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:09:41.405680   28158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:09:41.405898   28158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:09:41.405990   28158 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:09:41.406059   28158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:09:41.407930   28158 out.go:235]   - Generating certificates and keys ...
	I0819 17:09:41.407996   28158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:09:41.408111   28158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:09:41.408229   28158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:09:41.408308   28158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:09:41.408400   28158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:09:41.408477   28158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:09:41.408541   28158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:09:41.408646   28158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-227346 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0819 17:09:41.408693   28158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:09:41.408807   28158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-227346 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0819 17:09:41.408862   28158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:09:41.408942   28158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:09:41.409012   28158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:09:41.409100   28158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:09:41.409171   28158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:09:41.409249   28158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:09:41.409329   28158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:09:41.409415   28158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:09:41.409486   28158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:09:41.409602   28158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:09:41.409677   28158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:09:41.412032   28158 out.go:235]   - Booting up control plane ...
	I0819 17:09:41.412113   28158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:09:41.412175   28158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:09:41.412229   28158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:09:41.412317   28158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:09:41.412396   28158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:09:41.412430   28158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:09:41.412561   28158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:09:41.412670   28158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:09:41.412720   28158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001230439s
	I0819 17:09:41.412841   28158 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:09:41.412940   28158 kubeadm.go:310] [api-check] The API server is healthy after 5.675453497s
	I0819 17:09:41.413064   28158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:09:41.413202   28158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:09:41.413254   28158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:09:41.413406   28158 kubeadm.go:310] [mark-control-plane] Marking the node ha-227346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:09:41.413452   28158 kubeadm.go:310] [bootstrap-token] Using token: bnwy1v.t48ncxxc2fkxdt25
	I0819 17:09:41.414871   28158 out.go:235]   - Configuring RBAC rules ...
	I0819 17:09:41.414952   28158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:09:41.415053   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:09:41.415232   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:09:41.415361   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:09:41.415460   28158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:09:41.415555   28158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:09:41.415688   28158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:09:41.415754   28158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:09:41.415827   28158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:09:41.415836   28158 kubeadm.go:310] 
	I0819 17:09:41.415916   28158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:09:41.415924   28158 kubeadm.go:310] 
	I0819 17:09:41.416025   28158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:09:41.416034   28158 kubeadm.go:310] 
	I0819 17:09:41.416068   28158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:09:41.416147   28158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:09:41.416210   28158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:09:41.416219   28158 kubeadm.go:310] 
	I0819 17:09:41.416262   28158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:09:41.416268   28158 kubeadm.go:310] 
	I0819 17:09:41.416310   28158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:09:41.416314   28158 kubeadm.go:310] 
	I0819 17:09:41.416359   28158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:09:41.416429   28158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:09:41.416486   28158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:09:41.416495   28158 kubeadm.go:310] 
	I0819 17:09:41.416567   28158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:09:41.416639   28158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:09:41.416645   28158 kubeadm.go:310] 
	I0819 17:09:41.416772   28158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bnwy1v.t48ncxxc2fkxdt25 \
	I0819 17:09:41.416861   28158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 17:09:41.416882   28158 kubeadm.go:310] 	--control-plane 
	I0819 17:09:41.416887   28158 kubeadm.go:310] 
	I0819 17:09:41.416986   28158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:09:41.417003   28158 kubeadm.go:310] 
	I0819 17:09:41.417120   28158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bnwy1v.t48ncxxc2fkxdt25 \
	I0819 17:09:41.417278   28158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 17:09:41.417290   28158 cni.go:84] Creating CNI manager for ""
	I0819 17:09:41.417296   28158 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 17:09:41.418782   28158 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 17:09:41.419956   28158 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 17:09:41.426412   28158 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 17:09:41.426433   28158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 17:09:41.447079   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
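
The two lines above are the "copy manifest to the node, then apply it with the bundled kubectl" pattern minikube uses for the CNI. A minimal local sketch of the same idea, assuming a `kubectl` binary on PATH and a hypothetical manifest string (not minikube's actual helper):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest writes a Kubernetes manifest to a temporary file and
// applies it with `kubectl apply -f`, mirroring the scp+apply steps above.
func applyManifest(kubeconfig, manifest string) error {
	f, err := os.CreateTemp("", "cni-*.yaml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())

	if _, err := f.WriteString(manifest); err != nil {
		f.Close()
		return err
	}
	f.Close()

	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", f.Name())
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder manifest; the real cni.yaml is generated by minikube.
	if err := applyManifest("/var/lib/minikube/kubeconfig", "# kindnet manifest here\n"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```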
	I0819 17:09:41.821869   28158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:09:41.821949   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:41.821963   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-227346 minikube.k8s.io/updated_at=2024_08_19T17_09_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-227346 minikube.k8s.io/primary=true
	I0819 17:09:41.865646   28158 ops.go:34] apiserver oom_adj: -16
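
The `apiserver oom_adj: -16` line comes from reading `/proc/$(pgrep kube-apiserver)/oom_adj`, i.e. checking how strongly the kernel OOM killer avoids the API server. A small sketch of that check, with the process name and proc path taken from the log (illustrative helper, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// readOOMAdj returns the oom_adj value of the first process matching name,
// following the `cat /proc/$(pgrep ...)/oom_adj` probe in the log above.
func readOOMAdj(process string) (string, error) {
	out, err := exec.Command("pgrep", process).Output()
	if err != nil {
		return "", fmt.Errorf("pgrep %s: %w", process, err)
	}
	pid := strings.Fields(string(out))[0]
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(val)), nil
}

func main() {
	adj, err := readOOMAdj("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
```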
	I0819 17:09:41.986516   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:42.487361   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:42.987265   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:43.487112   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:43.987266   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:44.486579   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:44.987306   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:09:45.091004   28158 kubeadm.go:1113] duration metric: took 3.269118599s to wait for elevateKubeSystemPrivileges
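
The repeated `kubectl get sa default` runs above are a readiness poll: the step only completes once the `default` service account exists in the new cluster. A sketch of that polling loop, assuming a 500ms interval and an overall timeout (both values are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it succeeds or
// the timeout expires, mirroring the poll loop in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```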
	I0819 17:09:45.091040   28158 kubeadm.go:394] duration metric: took 14.439266352s to StartCluster
	I0819 17:09:45.091058   28158 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:45.091133   28158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:09:45.091898   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:09:45.092107   28158 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:09:45.092125   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:09:45.092141   28158 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 17:09:45.092181   28158 addons.go:69] Setting storage-provisioner=true in profile "ha-227346"
	I0819 17:09:45.092206   28158 addons.go:234] Setting addon storage-provisioner=true in "ha-227346"
	I0819 17:09:45.092131   28158 start.go:241] waiting for startup goroutines ...
	I0819 17:09:45.092228   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:09:45.092228   28158 addons.go:69] Setting default-storageclass=true in profile "ha-227346"
	I0819 17:09:45.092334   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:09:45.092369   28158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-227346"
	I0819 17:09:45.092693   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.092724   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.092852   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.092885   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.107380   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I0819 17:09:45.107694   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0819 17:09:45.107911   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.108031   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.108413   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.108431   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.108559   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.108583   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.108766   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.108898   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.109077   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:45.109260   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.109301   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.111477   28158 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:09:45.111842   28158 kapi.go:59] client config for ha-227346: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt", KeyFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key", CAFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 17:09:45.112339   28158 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 17:09:45.112653   28158 addons.go:234] Setting addon default-storageclass=true in "ha-227346"
	I0819 17:09:45.112694   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:09:45.113101   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.113144   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.124830   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0819 17:09:45.125358   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.125968   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.125995   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.126325   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.126508   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:45.127326   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0819 17:09:45.127754   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.128255   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.128278   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.128289   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:45.128611   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.129042   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:45.129071   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:45.130146   28158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:09:45.131435   28158 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:09:45.131458   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:09:45.131477   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:45.134424   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.134892   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:45.134931   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.135207   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:45.135404   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:45.135594   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:45.135764   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:45.144965   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0819 17:09:45.145478   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:45.145983   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:45.146005   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:45.146321   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:45.146515   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:09:45.148096   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:09:45.148319   28158 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:09:45.148334   28158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:09:45.148351   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:09:45.151255   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.151672   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:09:45.151699   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:09:45.151846   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:09:45.151989   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:09:45.152110   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:09:45.152237   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:09:45.259248   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:09:45.270732   28158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:09:45.349207   28158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:09:45.826885   28158 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
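
The sed pipeline a few lines above rewrites the CoreDNS Corefile so that `host.minikube.internal` resolves to the host IP: it inserts a `hosts { ... fallthrough }` block in front of the `forward . /etc/resolv.conf` directive and then replaces the ConfigMap. A string-level sketch of that edit (the real flow fetches and replaces the ConfigMap via kubectl; the sample Corefile below is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a `hosts` block in front of the `forward`
// directive of a CoreDNS Corefile, matching the sed edit in the log above.
func injectHostRecord(corefile, ip, host string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1", "host.minikube.internal"))
}
```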
	I0819 17:09:46.136930   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.136956   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137002   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.137024   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137230   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.137270   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137283   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.137291   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.137290   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.137299   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137381   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137396   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.137409   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.137420   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.137498   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137511   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.137561   28158 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 17:09:46.137580   28158 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 17:09:46.137669   28158 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0819 17:09:46.137680   28158 round_trippers.go:469] Request Headers:
	I0819 17:09:46.137690   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:09:46.137701   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:09:46.137769   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.137936   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.137961   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.153124   28158 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0819 17:09:46.153701   28158 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0819 17:09:46.153715   28158 round_trippers.go:469] Request Headers:
	I0819 17:09:46.153722   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:09:46.153726   28158 round_trippers.go:473]     Content-Type: application/json
	I0819 17:09:46.153730   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:09:46.158567   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:09:46.158753   28158 main.go:141] libmachine: Making call to close driver server
	I0819 17:09:46.158768   28158 main.go:141] libmachine: (ha-227346) Calling .Close
	I0819 17:09:46.159003   28158 main.go:141] libmachine: Successfully made call to close driver server
	I0819 17:09:46.159021   28158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 17:09:46.159031   28158 main.go:141] libmachine: (ha-227346) DBG | Closing plugin on server side
	I0819 17:09:46.160867   28158 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 17:09:46.162108   28158 addons.go:510] duration metric: took 1.069969082s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 17:09:46.162137   28158 start.go:246] waiting for cluster config update ...
	I0819 17:09:46.162150   28158 start.go:255] writing updated cluster config ...
	I0819 17:09:46.163441   28158 out.go:201] 
	I0819 17:09:46.164979   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:09:46.165041   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:46.166673   28158 out.go:177] * Starting "ha-227346-m02" control-plane node in "ha-227346" cluster
	I0819 17:09:46.168152   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:09:46.168176   28158 cache.go:56] Caching tarball of preloaded images
	I0819 17:09:46.168257   28158 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:09:46.168268   28158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:09:46.168330   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:09:46.168520   28158 start.go:360] acquireMachinesLock for ha-227346-m02: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:09:46.168567   28158 start.go:364] duration metric: took 28.205µs to acquireMachinesLock for "ha-227346-m02"
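
`acquireMachinesLock` serializes machine creation: provisioning of `ha-227346-m02` only starts once the named lock is held, and the wait is bounded by a timeout (13m0s in the line above). A toy in-process sketch of a timed named lock; minikube's real lock is file-backed and shared across processes, so this is only an illustration:

```go
package main

import (
	"fmt"
	"time"
)

// timedLock can be acquired with a timeout, in the spirit of
// acquireMachinesLock above.
type timedLock struct{ ch chan struct{} }

func newTimedLock() *timedLock { return &timedLock{ch: make(chan struct{}, 1)} }

func (l *timedLock) Acquire(timeout time.Duration) error {
	select {
	case l.ch <- struct{}{}: // took the single slot: lock held
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("timed out after %s", timeout)
	}
}

func (l *timedLock) Release() { <-l.ch }

func main() {
	lock := newTimedLock()
	start := time.Now()
	if err := lock.Acquire(13 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer lock.Release()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}
```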
	I0819 17:09:46.168593   28158 start.go:93] Provisioning new machine with config: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:09:46.168680   28158 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0819 17:09:46.170178   28158 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 17:09:46.170260   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:09:46.170288   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:09:46.184726   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0819 17:09:46.185219   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:09:46.185642   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:09:46.185661   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:09:46.186021   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:09:46.186241   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:09:46.186444   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:09:46.186620   28158 start.go:159] libmachine.API.Create for "ha-227346" (driver="kvm2")
	I0819 17:09:46.186643   28158 client.go:168] LocalClient.Create starting
	I0819 17:09:46.186676   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 17:09:46.186715   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:46.186732   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:46.186807   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 17:09:46.186838   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:09:46.186854   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:09:46.186882   28158 main.go:141] libmachine: Running pre-create checks...
	I0819 17:09:46.186893   28158 main.go:141] libmachine: (ha-227346-m02) Calling .PreCreateCheck
	I0819 17:09:46.187097   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetConfigRaw
	I0819 17:09:46.187488   28158 main.go:141] libmachine: Creating machine...
	I0819 17:09:46.187501   28158 main.go:141] libmachine: (ha-227346-m02) Calling .Create
	I0819 17:09:46.187656   28158 main.go:141] libmachine: (ha-227346-m02) Creating KVM machine...
	I0819 17:09:46.189110   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found existing default KVM network
	I0819 17:09:46.189234   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found existing private KVM network mk-ha-227346
	I0819 17:09:46.189390   28158 main.go:141] libmachine: (ha-227346-m02) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02 ...
	I0819 17:09:46.189435   28158 main.go:141] libmachine: (ha-227346-m02) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:09:46.189452   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.189357   28513 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:46.189540   28158 main.go:141] libmachine: (ha-227346-m02) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:09:46.423799   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.423674   28513 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa...
	I0819 17:09:46.514853   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.514745   28513 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/ha-227346-m02.rawdisk...
	I0819 17:09:46.514876   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Writing magic tar header
	I0819 17:09:46.514886   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Writing SSH key tar header
	I0819 17:09:46.514894   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:46.514850   28513 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02 ...
	I0819 17:09:46.514980   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02
	I0819 17:09:46.514997   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 17:09:46.515005   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02 (perms=drwx------)
	I0819 17:09:46.515012   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:09:46.515024   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 17:09:46.515031   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:09:46.515043   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 17:09:46.515049   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 17:09:46.515059   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:09:46.515066   28158 main.go:141] libmachine: (ha-227346-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:09:46.515074   28158 main.go:141] libmachine: (ha-227346-m02) Creating domain...
	I0819 17:09:46.515099   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:09:46.515123   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:09:46.515139   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Checking permissions on dir: /home
	I0819 17:09:46.515150   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Skipping /home - not owner
	I0819 17:09:46.516140   28158 main.go:141] libmachine: (ha-227346-m02) define libvirt domain using xml: 
	I0819 17:09:46.516155   28158 main.go:141] libmachine: (ha-227346-m02) <domain type='kvm'>
	I0819 17:09:46.516161   28158 main.go:141] libmachine: (ha-227346-m02)   <name>ha-227346-m02</name>
	I0819 17:09:46.516166   28158 main.go:141] libmachine: (ha-227346-m02)   <memory unit='MiB'>2200</memory>
	I0819 17:09:46.516189   28158 main.go:141] libmachine: (ha-227346-m02)   <vcpu>2</vcpu>
	I0819 17:09:46.516206   28158 main.go:141] libmachine: (ha-227346-m02)   <features>
	I0819 17:09:46.516212   28158 main.go:141] libmachine: (ha-227346-m02)     <acpi/>
	I0819 17:09:46.516217   28158 main.go:141] libmachine: (ha-227346-m02)     <apic/>
	I0819 17:09:46.516222   28158 main.go:141] libmachine: (ha-227346-m02)     <pae/>
	I0819 17:09:46.516229   28158 main.go:141] libmachine: (ha-227346-m02)     
	I0819 17:09:46.516234   28158 main.go:141] libmachine: (ha-227346-m02)   </features>
	I0819 17:09:46.516242   28158 main.go:141] libmachine: (ha-227346-m02)   <cpu mode='host-passthrough'>
	I0819 17:09:46.516247   28158 main.go:141] libmachine: (ha-227346-m02)   
	I0819 17:09:46.516252   28158 main.go:141] libmachine: (ha-227346-m02)   </cpu>
	I0819 17:09:46.516257   28158 main.go:141] libmachine: (ha-227346-m02)   <os>
	I0819 17:09:46.516264   28158 main.go:141] libmachine: (ha-227346-m02)     <type>hvm</type>
	I0819 17:09:46.516269   28158 main.go:141] libmachine: (ha-227346-m02)     <boot dev='cdrom'/>
	I0819 17:09:46.516274   28158 main.go:141] libmachine: (ha-227346-m02)     <boot dev='hd'/>
	I0819 17:09:46.516280   28158 main.go:141] libmachine: (ha-227346-m02)     <bootmenu enable='no'/>
	I0819 17:09:46.516290   28158 main.go:141] libmachine: (ha-227346-m02)   </os>
	I0819 17:09:46.516296   28158 main.go:141] libmachine: (ha-227346-m02)   <devices>
	I0819 17:09:46.516306   28158 main.go:141] libmachine: (ha-227346-m02)     <disk type='file' device='cdrom'>
	I0819 17:09:46.516338   28158 main.go:141] libmachine: (ha-227346-m02)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/boot2docker.iso'/>
	I0819 17:09:46.516361   28158 main.go:141] libmachine: (ha-227346-m02)       <target dev='hdc' bus='scsi'/>
	I0819 17:09:46.516374   28158 main.go:141] libmachine: (ha-227346-m02)       <readonly/>
	I0819 17:09:46.516383   28158 main.go:141] libmachine: (ha-227346-m02)     </disk>
	I0819 17:09:46.516398   28158 main.go:141] libmachine: (ha-227346-m02)     <disk type='file' device='disk'>
	I0819 17:09:46.516412   28158 main.go:141] libmachine: (ha-227346-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:09:46.516428   28158 main.go:141] libmachine: (ha-227346-m02)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/ha-227346-m02.rawdisk'/>
	I0819 17:09:46.516440   28158 main.go:141] libmachine: (ha-227346-m02)       <target dev='hda' bus='virtio'/>
	I0819 17:09:46.516449   28158 main.go:141] libmachine: (ha-227346-m02)     </disk>
	I0819 17:09:46.516463   28158 main.go:141] libmachine: (ha-227346-m02)     <interface type='network'>
	I0819 17:09:46.516475   28158 main.go:141] libmachine: (ha-227346-m02)       <source network='mk-ha-227346'/>
	I0819 17:09:46.516488   28158 main.go:141] libmachine: (ha-227346-m02)       <model type='virtio'/>
	I0819 17:09:46.516498   28158 main.go:141] libmachine: (ha-227346-m02)     </interface>
	I0819 17:09:46.516509   28158 main.go:141] libmachine: (ha-227346-m02)     <interface type='network'>
	I0819 17:09:46.516525   28158 main.go:141] libmachine: (ha-227346-m02)       <source network='default'/>
	I0819 17:09:46.516537   28158 main.go:141] libmachine: (ha-227346-m02)       <model type='virtio'/>
	I0819 17:09:46.516548   28158 main.go:141] libmachine: (ha-227346-m02)     </interface>
	I0819 17:09:46.516558   28158 main.go:141] libmachine: (ha-227346-m02)     <serial type='pty'>
	I0819 17:09:46.516569   28158 main.go:141] libmachine: (ha-227346-m02)       <target port='0'/>
	I0819 17:09:46.516581   28158 main.go:141] libmachine: (ha-227346-m02)     </serial>
	I0819 17:09:46.516591   28158 main.go:141] libmachine: (ha-227346-m02)     <console type='pty'>
	I0819 17:09:46.516601   28158 main.go:141] libmachine: (ha-227346-m02)       <target type='serial' port='0'/>
	I0819 17:09:46.516617   28158 main.go:141] libmachine: (ha-227346-m02)     </console>
	I0819 17:09:46.516639   28158 main.go:141] libmachine: (ha-227346-m02)     <rng model='virtio'>
	I0819 17:09:46.516651   28158 main.go:141] libmachine: (ha-227346-m02)       <backend model='random'>/dev/random</backend>
	I0819 17:09:46.516663   28158 main.go:141] libmachine: (ha-227346-m02)     </rng>
	I0819 17:09:46.516673   28158 main.go:141] libmachine: (ha-227346-m02)     
	I0819 17:09:46.516683   28158 main.go:141] libmachine: (ha-227346-m02)     
	I0819 17:09:46.516691   28158 main.go:141] libmachine: (ha-227346-m02)   </devices>
	I0819 17:09:46.516714   28158 main.go:141] libmachine: (ha-227346-m02) </domain>
	I0819 17:09:46.516736   28158 main.go:141] libmachine: (ha-227346-m02) 
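
The block above is the libvirt domain XML that libmachine defines for the new VM (boot ISO, raw disk, two virtio NICs on the `mk-ha-227346` and `default` networks, serial console, RNG device). A pared-down sketch of how such XML can be rendered from a config struct with `text/template`; the field names and template below are assumptions for illustration, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// domainTmpl is a reduced version of the domain XML printed above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "ha-227346-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-227346-m02.rawdisk",
		Network:   "mk-ha-227346",
	}
	// Render the XML that would be handed to libvirt's domain define call.
	template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, cfg)
}
```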
	I0819 17:09:46.523013   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:a9:0c:a1 in network default
	I0819 17:09:46.523580   28158 main.go:141] libmachine: (ha-227346-m02) Ensuring networks are active...
	I0819 17:09:46.523618   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:46.524219   28158 main.go:141] libmachine: (ha-227346-m02) Ensuring network default is active
	I0819 17:09:46.524528   28158 main.go:141] libmachine: (ha-227346-m02) Ensuring network mk-ha-227346 is active
	I0819 17:09:46.524908   28158 main.go:141] libmachine: (ha-227346-m02) Getting domain xml...
	I0819 17:09:46.525627   28158 main.go:141] libmachine: (ha-227346-m02) Creating domain...
	I0819 17:09:47.735681   28158 main.go:141] libmachine: (ha-227346-m02) Waiting to get IP...
	I0819 17:09:47.736569   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:47.736998   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:47.737018   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:47.736985   28513 retry.go:31] will retry after 188.449394ms: waiting for machine to come up
	I0819 17:09:47.927306   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:47.927798   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:47.927825   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:47.927762   28513 retry.go:31] will retry after 311.299545ms: waiting for machine to come up
	I0819 17:09:48.240293   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:48.240731   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:48.240770   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:48.240687   28513 retry.go:31] will retry after 426.822946ms: waiting for machine to come up
	I0819 17:09:48.669457   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:48.669960   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:48.669991   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:48.669909   28513 retry.go:31] will retry after 460.253566ms: waiting for machine to come up
	I0819 17:09:49.131460   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:49.131973   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:49.132013   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:49.131903   28513 retry.go:31] will retry after 659.325431ms: waiting for machine to come up
	I0819 17:09:49.792742   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:49.793238   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:49.793266   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:49.793188   28513 retry.go:31] will retry after 842.316805ms: waiting for machine to come up
	I0819 17:09:50.637184   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:50.637555   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:50.637581   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:50.637523   28513 retry.go:31] will retry after 891.20218ms: waiting for machine to come up
	I0819 17:09:51.529869   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:51.530353   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:51.530376   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:51.530303   28513 retry.go:31] will retry after 968.497872ms: waiting for machine to come up
	I0819 17:09:52.500332   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:52.500737   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:52.500781   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:52.500683   28513 retry.go:31] will retry after 1.361966722s: waiting for machine to come up
	I0819 17:09:53.864084   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:53.864538   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:53.864574   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:53.864484   28513 retry.go:31] will retry after 1.418071931s: waiting for machine to come up
	I0819 17:09:55.285394   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:55.285847   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:55.285868   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:55.285818   28513 retry.go:31] will retry after 2.811587726s: waiting for machine to come up
	I0819 17:09:58.099399   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:09:58.099879   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:09:58.099905   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:09:58.099837   28513 retry.go:31] will retry after 2.867282911s: waiting for machine to come up
	I0819 17:10:00.970848   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:00.971258   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:10:00.971280   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:10:00.971220   28513 retry.go:31] will retry after 3.969298378s: waiting for machine to come up
	I0819 17:10:04.942401   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:04.942777   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find current IP address of domain ha-227346-m02 in network mk-ha-227346
	I0819 17:10:04.942802   28158 main.go:141] libmachine: (ha-227346-m02) DBG | I0819 17:10:04.942743   28513 retry.go:31] will retry after 5.544139087s: waiting for machine to come up
	I0819 17:10:10.491913   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:10.492372   28158 main.go:141] libmachine: (ha-227346-m02) Found IP for machine: 192.168.39.189
	I0819 17:10:10.492391   28158 main.go:141] libmachine: (ha-227346-m02) Reserving static IP address...
	I0819 17:10:10.492401   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has current primary IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:10.492766   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find host DHCP lease matching {name: "ha-227346-m02", mac: "52:54:00:50:ca:df", ip: "192.168.39.189"} in network mk-ha-227346
	I0819 17:10:10.568180   28158 main.go:141] libmachine: (ha-227346-m02) Reserved static IP address: 192.168.39.189
	I0819 17:10:10.568205   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Getting to WaitForSSH function...
	I0819 17:10:10.568212   28158 main.go:141] libmachine: (ha-227346-m02) Waiting for SSH to be available...
	I0819 17:10:10.570889   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:10.571157   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346
	I0819 17:10:10.571179   28158 main.go:141] libmachine: (ha-227346-m02) DBG | unable to find defined IP address of network mk-ha-227346 interface with MAC address 52:54:00:50:ca:df
	I0819 17:10:10.571304   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH client type: external
	I0819 17:10:10.571328   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa (-rw-------)
	I0819 17:10:10.571400   28158 main.go:141] libmachine: (ha-227346-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:10:10.571433   28158 main.go:141] libmachine: (ha-227346-m02) DBG | About to run SSH command:
	I0819 17:10:10.571453   28158 main.go:141] libmachine: (ha-227346-m02) DBG | exit 0
	I0819 17:10:10.575374   28158 main.go:141] libmachine: (ha-227346-m02) DBG | SSH cmd err, output: exit status 255: 
	I0819 17:10:10.575401   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 17:10:10.575413   28158 main.go:141] libmachine: (ha-227346-m02) DBG | command : exit 0
	I0819 17:10:10.575421   28158 main.go:141] libmachine: (ha-227346-m02) DBG | err     : exit status 255
	I0819 17:10:10.575432   28158 main.go:141] libmachine: (ha-227346-m02) DBG | output  : 
	I0819 17:10:13.577470   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Getting to WaitForSSH function...
	I0819 17:10:13.579842   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.580251   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.580279   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.580397   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH client type: external
	I0819 17:10:13.580420   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa (-rw-------)
	I0819 17:10:13.580439   28158 main.go:141] libmachine: (ha-227346-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:10:13.580448   28158 main.go:141] libmachine: (ha-227346-m02) DBG | About to run SSH command:
	I0819 17:10:13.580456   28158 main.go:141] libmachine: (ha-227346-m02) DBG | exit 0
	I0819 17:10:13.704776   28158 main.go:141] libmachine: (ha-227346-m02) DBG | SSH cmd err, output: <nil>: 
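
WaitForSSH above keeps probing the guest (running `exit 0` over ssh, retrying after the initial `exit status 255` about every three seconds) until a command succeeds. A simplified sketch that only checks TCP reachability of port 22 rather than executing a command, with interval and timeout values chosen as assumptions:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls TCP port 22 on the given host until a connection
// succeeds or the deadline passes; a plain dial is a simplification of the
// `exit 0`-over-ssh probe in the log.
func waitForSSH(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh on %s not reachable after %s: %w", host, timeout, err)
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	if err := waitForSSH("192.168.39.189", 2*time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("SSH is available")
	}
}
```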
	I0819 17:10:13.705100   28158 main.go:141] libmachine: (ha-227346-m02) KVM machine creation complete!
	I0819 17:10:13.705424   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetConfigRaw
	I0819 17:10:13.705980   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:13.706159   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:13.706314   28158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:10:13.706330   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:10:13.707571   28158 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:10:13.707586   28158 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:10:13.707594   28158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:10:13.707602   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:13.709918   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.710239   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.710267   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.710395   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:13.710554   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.710702   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.710857   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:13.711028   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:13.711223   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:13.711235   28158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:10:13.815951   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:10:13.815974   28158 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:10:13.815981   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:13.818763   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.819095   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.819138   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.819245   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:13.819478   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.819628   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.819756   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:13.819937   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:13.820138   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:13.820149   28158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:10:13.925187   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:10:13.925286   28158 main.go:141] libmachine: found compatible host: buildroot
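
Provisioner detection above is just `cat /etc/os-release` parsed into key/value pairs, with `ID=buildroot` selecting the Buildroot provisioner. A small sketch of that parsing step (hypothetical helper; the sample input mirrors the output printed above):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into a key/value map.
func parseOSRelease(data string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}
```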
	I0819 17:10:13.925301   28158 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:10:13.925311   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:10:13.925553   28158 buildroot.go:166] provisioning hostname "ha-227346-m02"
	I0819 17:10:13.925592   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:10:13.925779   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:13.928355   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.928693   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:13.928719   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:13.928902   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:13.929053   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.929193   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:13.929351   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:13.929546   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:13.929742   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:13.929763   28158 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346-m02 && echo "ha-227346-m02" | sudo tee /etc/hostname
	I0819 17:10:14.046025   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346-m02
	
	I0819 17:10:14.046048   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.048692   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.049048   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.049073   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.049308   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.049483   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.049636   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.049785   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.049959   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:14.050116   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:14.050133   28158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:10:14.165466   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
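
The shell snippet executed just above keeps /etc/hosts consistent with the new hostname: if no line already ends with `ha-227346-m02`, it either rewrites the existing `127.0.1.1` entry or appends one. A string-level sketch of the same logic (the real code runs sed/tee over SSH; the sample hosts content is an assumption):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry leaves the file alone if the hostname is already present,
// otherwise rewrites the 127.0.1.1 line or appends a new one, matching the
// shell logic in the log above.
func ensureHostsEntry(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(hosts, "ha-227346-m02"))
}
```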
	I0819 17:10:14.165498   28158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:10:14.165519   28158 buildroot.go:174] setting up certificates
	I0819 17:10:14.165533   28158 provision.go:84] configureAuth start
	I0819 17:10:14.165545   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetMachineName
	I0819 17:10:14.165830   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:14.168646   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.169139   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.169167   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.169453   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.171899   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.172269   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.172289   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.172450   28158 provision.go:143] copyHostCerts
	I0819 17:10:14.172494   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:10:14.172534   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:10:14.172545   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:10:14.172628   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:10:14.172730   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:10:14.172775   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:10:14.172786   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:10:14.172825   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:10:14.172917   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:10:14.172943   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:10:14.172956   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:10:14.173015   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:10:14.173086   28158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346-m02 san=[127.0.0.1 192.168.39.189 ha-227346-m02 localhost minikube]
	I0819 17:10:14.404824   28158 provision.go:177] copyRemoteCerts
	I0819 17:10:14.404882   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:10:14.404904   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.407468   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.408000   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.408026   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.408194   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.408394   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.408546   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.408688   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:10:14.490366   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:10:14.490439   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:10:14.512203   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:10:14.512269   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:10:14.533541   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:10:14.533607   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:10:14.555048   28158 provision.go:87] duration metric: took 389.502363ms to configureAuth
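
configureAuth above generates a server certificate for the new machine with SANs covering 127.0.0.1, 192.168.39.189, ha-227346-m02, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The material can be inspected on the node with standard openssl calls (the paths come from the log; the openssl invocations themselves are only an illustrative check):

    # show subject and validity window, then the SANs, of the provisioned server cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # the cert and key should expose the same public key
    sudo openssl x509 -in /etc/docker/server.pem -noout -pubkey | sha256sum
    sudo openssl pkey -in /etc/docker/server-key.pem -pubout | sha256sum
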
	I0819 17:10:14.555077   28158 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:10:14.555276   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:10:14.555378   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.557985   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.558348   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.558371   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.558519   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.558726   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.558897   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.559040   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.559174   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:14.559361   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:14.559384   28158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:10:14.822319   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:10:14.822349   28158 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:10:14.822360   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetURL
	I0819 17:10:14.823708   28158 main.go:141] libmachine: (ha-227346-m02) DBG | Using libvirt version 6000000
	I0819 17:10:14.825607   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.825994   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.826023   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.826171   28158 main.go:141] libmachine: Docker is up and running!
	I0819 17:10:14.826186   28158 main.go:141] libmachine: Reticulating splines...
	I0819 17:10:14.826194   28158 client.go:171] duration metric: took 28.639543737s to LocalClient.Create
	I0819 17:10:14.826217   28158 start.go:167] duration metric: took 28.639597444s to libmachine.API.Create "ha-227346"
	I0819 17:10:14.826230   28158 start.go:293] postStartSetup for "ha-227346-m02" (driver="kvm2")
	I0819 17:10:14.826241   28158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:10:14.826271   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:14.826457   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:10:14.826481   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.828693   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.829056   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.829082   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.829188   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.829359   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.829476   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.829603   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:10:14.910460   28158 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:10:14.914512   28158 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:10:14.914540   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:10:14.914619   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:10:14.914692   28158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:10:14.914701   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:10:14.914804   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:10:14.925300   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:10:14.947332   28158 start.go:296] duration metric: took 121.09158ms for postStartSetup
	I0819 17:10:14.947386   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetConfigRaw
	I0819 17:10:14.947931   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:14.950477   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.950907   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.950938   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.951165   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:10:14.951391   28158 start.go:128] duration metric: took 28.782699753s to createHost
	I0819 17:10:14.951414   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:14.953585   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.953904   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:14.953932   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:14.954058   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:14.954230   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.954389   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:14.954524   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:14.954677   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:10:14.954847   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0819 17:10:14.954859   28158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:10:15.061309   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087415.043658666
	
	I0819 17:10:15.061332   28158 fix.go:216] guest clock: 1724087415.043658666
	I0819 17:10:15.061342   28158 fix.go:229] Guest: 2024-08-19 17:10:15.043658666 +0000 UTC Remote: 2024-08-19 17:10:14.951405072 +0000 UTC m=+70.948138926 (delta=92.253594ms)
	I0819 17:10:15.061358   28158 fix.go:200] guest clock delta is within tolerance: 92.253594ms
	I0819 17:10:15.061363   28158 start.go:83] releasing machines lock for "ha-227346-m02", held for 28.892778383s
	I0819 17:10:15.061380   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.061655   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:15.064201   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.064623   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:15.064647   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.066928   28158 out.go:177] * Found network options:
	I0819 17:10:15.068459   28158 out.go:177]   - NO_PROXY=192.168.39.205
	W0819 17:10:15.069697   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:10:15.069730   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.070207   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.070390   28158 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:10:15.070516   28158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:10:15.070571   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	W0819 17:10:15.070652   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:10:15.070726   28158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:10:15.070748   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:10:15.073465   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.073793   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.073955   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:15.073985   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.074153   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:15.074173   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:15.074154   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:15.074371   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:15.074450   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:10:15.074600   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:10:15.074608   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:15.074740   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:10:15.074781   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:10:15.074855   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:10:15.314729   28158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:10:15.320614   28158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:10:15.320676   28158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:10:15.335455   28158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 17:10:15.335477   28158 start.go:495] detecting cgroup driver to use...
	I0819 17:10:15.335551   28158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:10:15.349950   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:10:15.362294   28158 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:10:15.362354   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:10:15.374285   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:10:15.386522   28158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:10:15.500254   28158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:10:15.668855   28158 docker.go:233] disabling docker service ...
	I0819 17:10:15.668922   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:10:15.683306   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:10:15.695138   28158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:10:15.806495   28158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:10:15.913086   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:10:15.926950   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:10:15.943526   28158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:10:15.943584   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.952925   28158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:10:15.952987   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.962238   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.971415   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.980884   28158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:10:15.990330   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:15.999511   28158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:10:16.014505   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
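
Together with the /etc/sysconfig/crio.minikube drop-in written a few steps earlier (which passes --insecure-registry 10.96.0.0/12 to CRI-O), the sed sequence above points crictl at the CRI-O socket and edits /etc/crio/crio.conf.d/02-crio.conf so that the runtime uses registry.k8s.io/pause:3.10 as the pause image, cgroupfs as the cgroup manager, conmon in the "pod" cgroup, and an unprivileged-port floor of 0. The end state can be eyeballed on the guest like this (illustrative; the expected values in the comments are reconstructed from the commands above):

    cat /etc/crictl.yaml      # runtime-endpoint: unix:///var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
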
	I0819 17:10:16.023612   28158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:10:16.032033   28158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:10:16.032091   28158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:10:16.043635   28158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
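
The sysctl probe above fails only because br_netfilter is not loaded yet, so the provisioner loads the module and enables IPv4 forwarding; both are prerequisites for kube-proxy and the bridge CNI to see and forward pod traffic. As standalone commands (the modprobe and echo are exactly what the log runs; the final sysctl re-check is illustrative):

    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # re-check: the bridge keys should now exist and ip_forward should be 1
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
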
	I0819 17:10:16.052831   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:10:16.153853   28158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:10:16.287924   28158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:10:16.287995   28158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:10:16.292630   28158 start.go:563] Will wait 60s for crictl version
	I0819 17:10:16.292679   28158 ssh_runner.go:195] Run: which crictl
	I0819 17:10:16.296008   28158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:10:16.335502   28158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:10:16.335581   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:10:16.362522   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:10:16.395028   28158 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:10:16.396400   28158 out.go:177]   - env NO_PROXY=192.168.39.205
	I0819 17:10:16.397616   28158 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:10:16.400485   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:16.400833   28158 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:10:00 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:10:16.400855   28158 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:10:16.401116   28158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:10:16.404903   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:10:16.417153   28158 mustload.go:65] Loading cluster: ha-227346
	I0819 17:10:16.417360   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:10:16.417719   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:10:16.417750   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:10:16.432463   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35121
	I0819 17:10:16.432873   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:10:16.433379   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:10:16.433402   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:10:16.433722   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:10:16.433899   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:10:16.435405   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:10:16.435779   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:10:16.435808   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:10:16.450412   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0819 17:10:16.450873   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:10:16.451278   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:10:16.451295   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:10:16.451630   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:10:16.451796   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:10:16.451959   28158 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.189
	I0819 17:10:16.451973   28158 certs.go:194] generating shared ca certs ...
	I0819 17:10:16.451993   28158 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:10:16.452138   28158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:10:16.452183   28158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:10:16.452195   28158 certs.go:256] generating profile certs ...
	I0819 17:10:16.452284   28158 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:10:16.452339   28158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953
	I0819 17:10:16.452355   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.189 192.168.39.254]
	I0819 17:10:16.554898   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953 ...
	I0819 17:10:16.554929   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953: {Name:mk89a7010c986f3cf61c1e174f4fde9f10d23b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:10:16.555128   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953 ...
	I0819 17:10:16.555147   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953: {Name:mk5fa5db66f2352166e304769812bf8b73d24529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:10:16.555243   28158 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.57156953 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:10:16.555383   28158 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.57156953 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
	I0819 17:10:16.555505   28158 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:10:16.555520   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:10:16.555533   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:10:16.555546   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:10:16.555561   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:10:16.555574   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:10:16.555588   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:10:16.555600   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:10:16.555610   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:10:16.555656   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:10:16.555683   28158 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:10:16.555692   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:10:16.555712   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:10:16.555741   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:10:16.555775   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:10:16.555831   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:10:16.555870   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:10:16.555892   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:10:16.555910   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:16.555948   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:10:16.558824   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:16.559224   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:10:16.559244   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:16.559482   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:10:16.559696   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:10:16.559884   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:10:16.560038   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:10:16.633132   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 17:10:16.638719   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 17:10:16.649310   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 17:10:16.653594   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 17:10:16.663992   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 17:10:16.667976   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 17:10:16.678384   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 17:10:16.682663   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 17:10:16.692280   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 17:10:16.696445   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 17:10:16.705885   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 17:10:16.709510   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 17:10:16.719350   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:10:16.746516   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:10:16.769754   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:10:16.793189   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:10:16.815706   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 17:10:16.838401   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:10:16.860744   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:10:16.885085   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:10:16.908195   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:10:16.930925   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:10:16.952696   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:10:16.976569   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 17:10:16.991528   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 17:10:17.006563   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 17:10:17.021385   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 17:10:17.036452   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 17:10:17.051348   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 17:10:17.067308   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 17:10:17.084583   28158 ssh_runner.go:195] Run: openssl version
	I0819 17:10:17.090258   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:10:17.100790   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:10:17.105246   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:10:17.105294   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:10:17.111091   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:10:17.121772   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:10:17.132799   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:10:17.137305   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:10:17.137352   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:10:17.142630   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:10:17.152428   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:10:17.162391   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:17.166590   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:17.166637   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:10:17.171943   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
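
The openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run), which is how OpenSSL-based clients on the node look up trusted CAs. Written out as a generic sketch of that step:

    # link a CA certificate into the hash-addressed trust directory
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
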
	I0819 17:10:17.183460   28158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:10:17.187758   28158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:10:17.187807   28158 kubeadm.go:934] updating node {m02 192.168.39.189 8443 v1.31.0 crio true true} ...
	I0819 17:10:17.187878   28158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
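
The unit fragment above is what ends up in the kubelet drop-in copied to the node a few steps later: the empty ExecStart= clears the stock command and the second ExecStart relaunches kubelet with the node-specific flags (--hostname-override and --node-ip for m02), while the bootstrap kubeconfig drives the initial TLS bootstrap against the API server. Which drop-ins and flags actually took effect can be confirmed on the node with (illustrative, not part of the test):

    systemctl cat kubelet     # prints the unit plus every drop-in in effect
    ps -o args= -C kubelet    # shows the flags kubelet was actually started with
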
	I0819 17:10:17.187900   28158 kube-vip.go:115] generating kube-vip config ...
	I0819 17:10:17.187931   28158 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:10:17.203458   28158 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:10:17.203539   28158 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
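
The static-pod manifest above runs kube-vip on the control-plane node: ARP-based leader election over the plndr-cp-lock lease decides which node announces the virtual IP 192.168.39.254 on eth0, and with lb_enable it also load-balances API-server traffic on port 8443, which is why the join below targets control-plane.minikube.internal:8443 rather than a single node. Once the node is up, the VIP and the lease can be checked with something along these lines (illustrative commands; the mirror-pod name assumes the usual <static-pod-name>-<node> convention):

    ip -4 addr show dev eth0 | grep 192.168.39.254    # on whichever node currently holds the VIP
    kubectl -n kube-system get lease plndr-cp-lock
    kubectl -n kube-system get pod kube-vip-ha-227346-m02
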
	I0819 17:10:17.203597   28158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:10:17.213862   28158 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 17:10:17.213921   28158 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 17:10:17.223785   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 17:10:17.223808   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:10:17.223885   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:10:17.223888   28158 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0819 17:10:17.223921   28158 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0819 17:10:17.227943   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 17:10:17.227966   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 17:10:18.042144   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:10:18.042226   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:10:18.048250   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 17:10:18.048290   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 17:10:18.177183   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:10:18.215271   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:10:18.215370   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:10:18.221227   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 17:10:18.221265   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
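
Because /var/lib/minikube/binaries/v1.31.0 does not exist on the fresh node, kubectl, kubeadm and kubelet are streamed over SSH from the host-side cache; the download URLs in the log pin each binary to its published .sha256 file. A cached binary can be re-verified on the host in the usual way (paths taken from the log; the curl/sha256sum invocation is illustrative):

    cd /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0
    curl -fsSLo kubelet.sha256 https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -
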
	I0819 17:10:18.622342   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 17:10:18.631749   28158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0819 17:10:18.647663   28158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:10:18.664188   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 17:10:18.681184   28158 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:10:18.684885   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:10:18.696116   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:10:18.813538   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:10:18.832105   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:10:18.832448   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:10:18.832497   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:10:18.847682   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0819 17:10:18.848098   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:10:18.848538   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:10:18.848561   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:10:18.848869   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:10:18.849075   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:10:18.849201   28158 start.go:317] joinCluster: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:10:18.849320   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 17:10:18.849344   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:10:18.852504   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:18.852978   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:10:18.853003   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:10:18.853160   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:10:18.853361   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:10:18.853535   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:10:18.853696   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:10:18.999093   28158 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:10:18.999140   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dbt7f7.h17s4g2mjf3dg3ww --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m02 --control-plane --apiserver-advertise-address=192.168.39.189 --apiserver-bind-port=8443"
	I0819 17:10:41.001845   28158 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dbt7f7.h17s4g2mjf3dg3ww --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m02 --control-plane --apiserver-advertise-address=192.168.39.189 --apiserver-bind-port=8443": (22.002675591s)
	I0819 17:10:41.001879   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 17:10:41.465428   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-227346-m02 minikube.k8s.io/updated_at=2024_08_19T17_10_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-227346 minikube.k8s.io/primary=false
	I0819 17:10:41.592904   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-227346-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 17:10:41.715661   28158 start.go:319] duration metric: took 22.866456336s to joinCluster
	I0819 17:10:41.715746   28158 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:10:41.716061   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:10:41.717273   28158 out.go:177] * Verifying Kubernetes components...
	I0819 17:10:41.718628   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:10:41.969118   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:10:41.997090   28158 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:10:41.997406   28158 kapi.go:59] client config for ha-227346: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt", KeyFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key", CAFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 17:10:41.997494   28158 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.205:8443
	I0819 17:10:41.997757   28158 node_ready.go:35] waiting up to 6m0s for node "ha-227346-m02" to be "Ready" ...
	I0819 17:10:41.997867   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:41.997878   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:41.997889   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:41.997896   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:42.018719   28158 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0819 17:10:42.498706   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:42.498731   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:42.498742   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:42.498748   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:42.503472   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:42.998305   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:42.998328   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:42.998337   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:42.998342   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:43.002110   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:43.497971   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:43.497993   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:43.498004   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:43.498009   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:43.504416   28158 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 17:10:43.998737   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:43.998766   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:43.998778   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:43.998784   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:44.004655   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:10:44.005243   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:44.498460   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:44.498497   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:44.498506   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:44.498510   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:44.502097   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:44.998098   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:44.998124   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:44.998136   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:44.998143   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:45.002041   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:45.498316   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:45.498338   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:45.498349   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:45.498354   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:45.502591   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:45.998601   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:45.998625   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:45.998633   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:45.998637   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:46.001767   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:46.498824   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:46.498848   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:46.498859   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:46.498867   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:46.503050   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:46.503843   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:46.998034   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:46.998055   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:46.998063   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:46.998067   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:47.001237   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:47.498117   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:47.498142   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:47.498149   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:47.498154   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:47.501279   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:47.998889   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:47.998911   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:47.998919   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:47.998923   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:48.002034   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:48.497954   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:48.497978   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:48.497986   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:48.497990   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:48.501461   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:48.997984   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:48.998009   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:48.998020   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:48.998028   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:49.009078   28158 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 17:10:49.009709   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:49.498580   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:49.498602   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:49.498609   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:49.498613   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:49.501899   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:49.997947   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:49.997973   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:49.997985   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:49.997990   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:50.002900   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:10:50.498790   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:50.498814   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:50.498825   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:50.498834   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:50.502115   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:50.998060   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:50.998084   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:50.998092   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:50.998096   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:51.001338   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:51.498702   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:51.498724   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:51.498732   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:51.498736   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:51.501967   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:51.502631   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:51.998953   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:51.998980   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:51.998990   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:51.998993   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:52.002432   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:52.498310   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:52.498335   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:52.498350   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:52.498356   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:52.501524   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:52.998631   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:52.998654   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:52.998661   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:52.998664   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:53.002129   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:53.498111   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:53.498133   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:53.498142   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:53.498145   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:53.501106   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:10:53.998417   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:53.998442   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:53.998450   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:53.998454   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:54.001730   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:54.002348   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:54.498533   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:54.498559   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:54.498568   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:54.498572   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:54.501740   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:54.998768   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:54.998795   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:54.998806   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:54.998812   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:55.002045   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:55.498562   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:55.498586   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:55.498594   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:55.498598   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:55.501665   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:55.998689   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:55.998712   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:55.998720   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:55.998725   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:56.002395   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:56.003190   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:56.498884   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:56.498901   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:56.498909   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:56.498916   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:56.502288   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:56.998395   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:56.998418   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:56.998426   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:56.998430   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:57.001613   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:57.498638   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:57.498661   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:57.498669   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:57.498674   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:57.502084   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:57.997914   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:57.997934   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:57.997940   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:57.997944   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:58.000845   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:10:58.498823   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:58.498847   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:58.498857   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:58.498862   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:58.501955   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:58.502615   28158 node_ready.go:53] node "ha-227346-m02" has status "Ready":"False"
	I0819 17:10:58.998498   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:58.998518   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:58.998526   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:58.998531   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:59.001617   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:59.498169   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:59.498192   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:59.498205   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:10:59.498209   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:59.501467   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:10:59.998446   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:10:59.998469   28158 round_trippers.go:469] Request Headers:
	I0819 17:10:59.998477   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:10:59.998480   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.001728   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.498624   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:00.498641   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.498648   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.498652   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.501614   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.502154   28158 node_ready.go:49] node "ha-227346-m02" has status "Ready":"True"
	I0819 17:11:00.502172   28158 node_ready.go:38] duration metric: took 18.504391343s for node "ha-227346-m02" to be "Ready" ...
	I0819 17:11:00.502182   28158 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:11:00.502285   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:00.502296   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.502306   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.502312   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.506087   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.513482   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.513559   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-9s77g
	I0819 17:11:00.513569   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.513579   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.513588   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.515942   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.516544   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.516557   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.516567   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.516572   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.518878   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.519354   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.519376   28158 pod_ready.go:82] duration metric: took 5.867708ms for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.519389   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.519447   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-r68td
	I0819 17:11:00.519455   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.519462   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.519470   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.521900   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.522627   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.522642   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.522651   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.522656   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.524800   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.525352   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.525376   28158 pod_ready.go:82] duration metric: took 5.968846ms for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.525388   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.525449   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346
	I0819 17:11:00.525459   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.525469   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.525480   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.527626   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.528068   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.528082   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.528089   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.528092   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.530155   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:00.530632   28158 pod_ready.go:93] pod "etcd-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.530646   28158 pod_ready.go:82] duration metric: took 5.247627ms for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.530654   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.530705   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m02
	I0819 17:11:00.530713   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.530719   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.530725   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.532669   28158 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 17:11:00.533187   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:00.533201   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.533211   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.533217   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.535027   28158 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 17:11:00.535499   28158 pod_ready.go:93] pod "etcd-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.535513   28158 pod_ready.go:82] duration metric: took 4.853299ms for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.535525   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.698905   28158 request.go:632] Waited for 163.321682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:11:00.698978   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:11:00.698983   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.698993   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.699001   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.702229   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.899354   28158 request.go:632] Waited for 196.3754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.899411   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:00.899416   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:00.899433   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:00.899451   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:00.903052   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:00.903575   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:00.903590   28158 pod_ready.go:82] duration metric: took 368.059975ms for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:00.903608   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.098874   28158 request.go:632] Waited for 195.200511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:11:01.098947   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:11:01.098952   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.098960   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.098968   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.102418   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.299539   28158 request.go:632] Waited for 196.428899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:01.299627   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:01.299639   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.299647   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.299652   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.302808   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.303240   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:01.303258   28158 pod_ready.go:82] duration metric: took 399.642843ms for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.303267   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.499450   28158 request.go:632] Waited for 196.101424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:11:01.499502   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:11:01.499507   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.499514   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.499519   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.503016   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.699168   28158 request.go:632] Waited for 195.362344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:01.699247   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:01.699254   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.699314   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.699330   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.702600   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:01.703116   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:01.703135   28158 pod_ready.go:82] duration metric: took 399.862476ms for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.703145   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:01.899255   28158 request.go:632] Waited for 196.044062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:11:01.899335   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:11:01.899346   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:01.899359   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:01.899376   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:01.902884   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.099095   28158 request.go:632] Waited for 195.34707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.099163   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.099169   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.099176   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.099181   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.102634   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.103074   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:02.103093   28158 pod_ready.go:82] duration metric: took 399.942297ms for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.103103   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.299277   28158 request.go:632] Waited for 196.111667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:11:02.299333   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:11:02.299338   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.299347   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.299350   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.302630   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.499585   28158 request.go:632] Waited for 196.381609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.499642   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:02.499647   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.499654   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.499658   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.502762   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.503444   28158 pod_ready.go:93] pod "kube-proxy-6lhlp" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:02.503468   28158 pod_ready.go:82] duration metric: took 400.355898ms for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.503480   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.699428   28158 request.go:632] Waited for 195.87825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:11:02.699509   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:11:02.699517   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.699525   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.699529   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.703997   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:11:02.899114   28158 request.go:632] Waited for 194.378055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:02.899179   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:02.899188   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:02.899199   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:02.899215   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:02.902614   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:02.903078   28158 pod_ready.go:93] pod "kube-proxy-9xpm4" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:02.903138   28158 pod_ready.go:82] duration metric: took 399.606177ms for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:02.903157   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.099330   28158 request.go:632] Waited for 196.104442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:11:03.099431   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:11:03.099443   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.099454   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.099461   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.103207   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.299483   28158 request.go:632] Waited for 195.412597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:03.299551   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:11:03.299560   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.299585   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.299607   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.302800   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.303465   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:03.303484   28158 pod_ready.go:82] duration metric: took 400.318392ms for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.303497   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.499637   28158 request.go:632] Waited for 196.072281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:11:03.499711   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:11:03.499717   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.499724   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.499728   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.502937   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.698798   28158 request.go:632] Waited for 195.290311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:03.698880   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:11:03.698887   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.698894   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.698902   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.702079   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:03.702775   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:11:03.702792   28158 pod_ready.go:82] duration metric: took 399.285458ms for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:11:03.702803   28158 pod_ready.go:39] duration metric: took 3.200583312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:11:03.702815   28158 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:11:03.702862   28158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:11:03.717359   28158 api_server.go:72] duration metric: took 22.001580434s to wait for apiserver process to appear ...
	I0819 17:11:03.717390   28158 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:11:03.717410   28158 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0819 17:11:03.722002   28158 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0819 17:11:03.722070   28158 round_trippers.go:463] GET https://192.168.39.205:8443/version
	I0819 17:11:03.722081   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.722091   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.722099   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.722965   28158 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0819 17:11:03.723083   28158 api_server.go:141] control plane version: v1.31.0
	I0819 17:11:03.723100   28158 api_server.go:131] duration metric: took 5.703682ms to wait for apiserver health ...
	I0819 17:11:03.723108   28158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:11:03.899648   28158 request.go:632] Waited for 176.468967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:03.899727   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:03.899735   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:03.899749   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:03.899757   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:03.904179   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:11:03.909765   28158 system_pods.go:59] 17 kube-system pods found
	I0819 17:11:03.909792   28158 system_pods.go:61] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:11:03.909796   28158 system_pods.go:61] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:11:03.909800   28158 system_pods.go:61] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:11:03.909804   28158 system_pods.go:61] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:11:03.909807   28158 system_pods.go:61] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:11:03.909811   28158 system_pods.go:61] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:11:03.909814   28158 system_pods.go:61] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:11:03.909817   28158 system_pods.go:61] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:11:03.909821   28158 system_pods.go:61] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:11:03.909825   28158 system_pods.go:61] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:11:03.909828   28158 system_pods.go:61] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:11:03.909832   28158 system_pods.go:61] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:11:03.909835   28158 system_pods.go:61] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:11:03.909838   28158 system_pods.go:61] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:11:03.909841   28158 system_pods.go:61] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:11:03.909844   28158 system_pods.go:61] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:11:03.909847   28158 system_pods.go:61] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:11:03.909855   28158 system_pods.go:74] duration metric: took 186.742562ms to wait for pod list to return data ...
	I0819 17:11:03.909862   28158 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:11:04.099680   28158 request.go:632] Waited for 189.755136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:11:04.099732   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:11:04.099737   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:04.099744   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:04.099749   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:04.103334   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:04.103566   28158 default_sa.go:45] found service account: "default"
	I0819 17:11:04.103583   28158 default_sa.go:55] duration metric: took 193.71521ms for default service account to be created ...
	I0819 17:11:04.103593   28158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:11:04.299116   28158 request.go:632] Waited for 195.437455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:04.299188   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:11:04.299195   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:04.299203   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:04.299216   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:04.303053   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:04.308059   28158 system_pods.go:86] 17 kube-system pods found
	I0819 17:11:04.308087   28158 system_pods.go:89] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:11:04.308093   28158 system_pods.go:89] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:11:04.308097   28158 system_pods.go:89] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:11:04.308101   28158 system_pods.go:89] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:11:04.308105   28158 system_pods.go:89] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:11:04.308108   28158 system_pods.go:89] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:11:04.308113   28158 system_pods.go:89] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:11:04.308117   28158 system_pods.go:89] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:11:04.308121   28158 system_pods.go:89] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:11:04.308124   28158 system_pods.go:89] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:11:04.308127   28158 system_pods.go:89] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:11:04.308131   28158 system_pods.go:89] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:11:04.308134   28158 system_pods.go:89] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:11:04.308137   28158 system_pods.go:89] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:11:04.308140   28158 system_pods.go:89] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:11:04.308144   28158 system_pods.go:89] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:11:04.308147   28158 system_pods.go:89] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:11:04.308153   28158 system_pods.go:126] duration metric: took 204.542478ms to wait for k8s-apps to be running ...
	I0819 17:11:04.308162   28158 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:11:04.308204   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:11:04.324047   28158 system_svc.go:56] duration metric: took 15.875431ms WaitForService to wait for kubelet
	I0819 17:11:04.324083   28158 kubeadm.go:582] duration metric: took 22.608307073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:11:04.324105   28158 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:11:04.499529   28158 request.go:632] Waited for 175.342422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes
	I0819 17:11:04.499596   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes
	I0819 17:11:04.499609   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:04.499617   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:04.499621   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:04.503453   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:04.504089   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:11:04.504111   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:11:04.504122   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:11:04.504126   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:11:04.504131   28158 node_conditions.go:105] duration metric: took 180.020079ms to run NodePressure ...
	I0819 17:11:04.504143   28158 start.go:241] waiting for startup goroutines ...
	I0819 17:11:04.504173   28158 start.go:255] writing updated cluster config ...
	I0819 17:11:04.506186   28158 out.go:201] 
	I0819 17:11:04.507575   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:04.507676   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:11:04.509144   28158 out.go:177] * Starting "ha-227346-m03" control-plane node in "ha-227346" cluster
	I0819 17:11:04.510145   28158 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:11:04.510164   28158 cache.go:56] Caching tarball of preloaded images
	I0819 17:11:04.510253   28158 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:11:04.510264   28158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:11:04.510345   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:11:04.510515   28158 start.go:360] acquireMachinesLock for ha-227346-m03: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:11:04.510555   28158 start.go:364] duration metric: took 22.476µs to acquireMachinesLock for "ha-227346-m03"
	I0819 17:11:04.510572   28158 start.go:93] Provisioning new machine with config: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:11:04.510664   28158 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0819 17:11:04.512151   28158 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 17:11:04.512219   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:04.512249   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:04.527050   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36431
	I0819 17:11:04.527528   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:04.527955   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:04.527976   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:04.528289   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:04.528487   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:04.528677   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:04.528837   28158 start.go:159] libmachine.API.Create for "ha-227346" (driver="kvm2")
	I0819 17:11:04.528860   28158 client.go:168] LocalClient.Create starting
	I0819 17:11:04.528894   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 17:11:04.528931   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:11:04.528948   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:11:04.529013   28158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 17:11:04.529036   28158 main.go:141] libmachine: Decoding PEM data...
	I0819 17:11:04.529046   28158 main.go:141] libmachine: Parsing certificate...
	I0819 17:11:04.529070   28158 main.go:141] libmachine: Running pre-create checks...
	I0819 17:11:04.529083   28158 main.go:141] libmachine: (ha-227346-m03) Calling .PreCreateCheck
	I0819 17:11:04.529286   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetConfigRaw
	I0819 17:11:04.529646   28158 main.go:141] libmachine: Creating machine...
	I0819 17:11:04.529660   28158 main.go:141] libmachine: (ha-227346-m03) Calling .Create
	I0819 17:11:04.529777   28158 main.go:141] libmachine: (ha-227346-m03) Creating KVM machine...
	I0819 17:11:04.530855   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found existing default KVM network
	I0819 17:11:04.530938   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found existing private KVM network mk-ha-227346
	I0819 17:11:04.531058   28158 main.go:141] libmachine: (ha-227346-m03) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03 ...
	I0819 17:11:04.531080   28158 main.go:141] libmachine: (ha-227346-m03) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:11:04.531136   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.531043   28924 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:11:04.531228   28158 main.go:141] libmachine: (ha-227346-m03) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:11:04.755830   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.755704   28924 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa...
	I0819 17:11:04.872298   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.872159   28924 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/ha-227346-m03.rawdisk...
	I0819 17:11:04.872340   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Writing magic tar header
	I0819 17:11:04.872357   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Writing SSH key tar header
	I0819 17:11:04.872382   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:04.872324   28924 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03 ...
	I0819 17:11:04.872510   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03
	I0819 17:11:04.872530   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03 (perms=drwx------)
	I0819 17:11:04.872538   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 17:11:04.872553   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:11:04.872564   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 17:11:04.872584   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:11:04.872596   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:11:04.872601   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:11:04.872611   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 17:11:04.872623   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 17:11:04.872636   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Checking permissions on dir: /home
	I0819 17:11:04.872649   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:11:04.872660   28158 main.go:141] libmachine: (ha-227346-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:11:04.872671   28158 main.go:141] libmachine: (ha-227346-m03) Creating domain...
	I0819 17:11:04.872682   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Skipping /home - not owner
	I0819 17:11:04.873536   28158 main.go:141] libmachine: (ha-227346-m03) define libvirt domain using xml: 
	I0819 17:11:04.873558   28158 main.go:141] libmachine: (ha-227346-m03) <domain type='kvm'>
	I0819 17:11:04.873569   28158 main.go:141] libmachine: (ha-227346-m03)   <name>ha-227346-m03</name>
	I0819 17:11:04.873575   28158 main.go:141] libmachine: (ha-227346-m03)   <memory unit='MiB'>2200</memory>
	I0819 17:11:04.873586   28158 main.go:141] libmachine: (ha-227346-m03)   <vcpu>2</vcpu>
	I0819 17:11:04.873600   28158 main.go:141] libmachine: (ha-227346-m03)   <features>
	I0819 17:11:04.873607   28158 main.go:141] libmachine: (ha-227346-m03)     <acpi/>
	I0819 17:11:04.873612   28158 main.go:141] libmachine: (ha-227346-m03)     <apic/>
	I0819 17:11:04.873624   28158 main.go:141] libmachine: (ha-227346-m03)     <pae/>
	I0819 17:11:04.873634   28158 main.go:141] libmachine: (ha-227346-m03)     
	I0819 17:11:04.873661   28158 main.go:141] libmachine: (ha-227346-m03)   </features>
	I0819 17:11:04.873681   28158 main.go:141] libmachine: (ha-227346-m03)   <cpu mode='host-passthrough'>
	I0819 17:11:04.873691   28158 main.go:141] libmachine: (ha-227346-m03)   
	I0819 17:11:04.873700   28158 main.go:141] libmachine: (ha-227346-m03)   </cpu>
	I0819 17:11:04.873710   28158 main.go:141] libmachine: (ha-227346-m03)   <os>
	I0819 17:11:04.873720   28158 main.go:141] libmachine: (ha-227346-m03)     <type>hvm</type>
	I0819 17:11:04.873733   28158 main.go:141] libmachine: (ha-227346-m03)     <boot dev='cdrom'/>
	I0819 17:11:04.873743   28158 main.go:141] libmachine: (ha-227346-m03)     <boot dev='hd'/>
	I0819 17:11:04.873752   28158 main.go:141] libmachine: (ha-227346-m03)     <bootmenu enable='no'/>
	I0819 17:11:04.873761   28158 main.go:141] libmachine: (ha-227346-m03)   </os>
	I0819 17:11:04.873770   28158 main.go:141] libmachine: (ha-227346-m03)   <devices>
	I0819 17:11:04.873781   28158 main.go:141] libmachine: (ha-227346-m03)     <disk type='file' device='cdrom'>
	I0819 17:11:04.873799   28158 main.go:141] libmachine: (ha-227346-m03)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/boot2docker.iso'/>
	I0819 17:11:04.873816   28158 main.go:141] libmachine: (ha-227346-m03)       <target dev='hdc' bus='scsi'/>
	I0819 17:11:04.873826   28158 main.go:141] libmachine: (ha-227346-m03)       <readonly/>
	I0819 17:11:04.873834   28158 main.go:141] libmachine: (ha-227346-m03)     </disk>
	I0819 17:11:04.873845   28158 main.go:141] libmachine: (ha-227346-m03)     <disk type='file' device='disk'>
	I0819 17:11:04.873861   28158 main.go:141] libmachine: (ha-227346-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:11:04.873908   28158 main.go:141] libmachine: (ha-227346-m03)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/ha-227346-m03.rawdisk'/>
	I0819 17:11:04.873919   28158 main.go:141] libmachine: (ha-227346-m03)       <target dev='hda' bus='virtio'/>
	I0819 17:11:04.873925   28158 main.go:141] libmachine: (ha-227346-m03)     </disk>
	I0819 17:11:04.873946   28158 main.go:141] libmachine: (ha-227346-m03)     <interface type='network'>
	I0819 17:11:04.873962   28158 main.go:141] libmachine: (ha-227346-m03)       <source network='mk-ha-227346'/>
	I0819 17:11:04.873970   28158 main.go:141] libmachine: (ha-227346-m03)       <model type='virtio'/>
	I0819 17:11:04.873976   28158 main.go:141] libmachine: (ha-227346-m03)     </interface>
	I0819 17:11:04.873985   28158 main.go:141] libmachine: (ha-227346-m03)     <interface type='network'>
	I0819 17:11:04.873995   28158 main.go:141] libmachine: (ha-227346-m03)       <source network='default'/>
	I0819 17:11:04.874008   28158 main.go:141] libmachine: (ha-227346-m03)       <model type='virtio'/>
	I0819 17:11:04.874021   28158 main.go:141] libmachine: (ha-227346-m03)     </interface>
	I0819 17:11:04.874057   28158 main.go:141] libmachine: (ha-227346-m03)     <serial type='pty'>
	I0819 17:11:04.874089   28158 main.go:141] libmachine: (ha-227346-m03)       <target port='0'/>
	I0819 17:11:04.874105   28158 main.go:141] libmachine: (ha-227346-m03)     </serial>
	I0819 17:11:04.874116   28158 main.go:141] libmachine: (ha-227346-m03)     <console type='pty'>
	I0819 17:11:04.874129   28158 main.go:141] libmachine: (ha-227346-m03)       <target type='serial' port='0'/>
	I0819 17:11:04.874139   28158 main.go:141] libmachine: (ha-227346-m03)     </console>
	I0819 17:11:04.874150   28158 main.go:141] libmachine: (ha-227346-m03)     <rng model='virtio'>
	I0819 17:11:04.874166   28158 main.go:141] libmachine: (ha-227346-m03)       <backend model='random'>/dev/random</backend>
	I0819 17:11:04.874185   28158 main.go:141] libmachine: (ha-227346-m03)     </rng>
	I0819 17:11:04.874203   28158 main.go:141] libmachine: (ha-227346-m03)     
	I0819 17:11:04.874219   28158 main.go:141] libmachine: (ha-227346-m03)     
	I0819 17:11:04.874229   28158 main.go:141] libmachine: (ha-227346-m03)   </devices>
	I0819 17:11:04.874242   28158 main.go:141] libmachine: (ha-227346-m03) </domain>
	I0819 17:11:04.874250   28158 main.go:141] libmachine: (ha-227346-m03) 
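Editor's note: the libvirt domain definition logged above is hard to read behind the per-line prefixes. Below is a minimal Go sketch that assembles an equivalent domain XML from the values shown in the log; it only prints the result (in minikube the XML is handed to libvirt to define the domain). The struct, template variable names, and the choice to print instead of calling libvirt are illustrative assumptions, not minikube's actual code.

package main

import (
	"log"
	"os"
	"text/template"
)

// domainParams holds the values seen in the log above; the field names are
// illustrative, not minikube's actual struct.
type domainParams struct {
	Name       string
	MemoryMiB  int
	VCPU       int
	ISOPath    string
	DiskPath   string
	PrivateNet string
}

// domainXML mirrors the <domain> printed in the log: CD-ROM then disk boot,
// a raw-format virtio disk, two virtio NICs (private network plus default),
// a serial console, and a virtio RNG backed by /dev/random.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <features><acpi/><apic/><pae/></features>
  <cpu mode='host-passthrough'></cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNet}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'><target port='0'/></serial>
    <console type='pty'><target type='serial' port='0'/></console>
    <rng model='virtio'><backend model='random'>/dev/random</backend></rng>
  </devices>
</domain>
`

func main() {
	p := domainParams{
		Name:       "ha-227346-m03",
		MemoryMiB:  2200,
		VCPU:       2,
		ISOPath:    "/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/boot2docker.iso",
		DiskPath:   "/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/ha-227346-m03.rawdisk",
		PrivateNet: "mk-ha-227346",
	}
	if err := template.Must(template.New("domain").Parse(domainXML)).Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}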
	I0819 17:11:04.880861   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:55:cd:c0 in network default
	I0819 17:11:04.881422   28158 main.go:141] libmachine: (ha-227346-m03) Ensuring networks are active...
	I0819 17:11:04.881441   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:04.882176   28158 main.go:141] libmachine: (ha-227346-m03) Ensuring network default is active
	I0819 17:11:04.882447   28158 main.go:141] libmachine: (ha-227346-m03) Ensuring network mk-ha-227346 is active
	I0819 17:11:04.882807   28158 main.go:141] libmachine: (ha-227346-m03) Getting domain xml...
	I0819 17:11:04.883659   28158 main.go:141] libmachine: (ha-227346-m03) Creating domain...
	I0819 17:11:06.122917   28158 main.go:141] libmachine: (ha-227346-m03) Waiting to get IP...
	I0819 17:11:06.123667   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:06.124078   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:06.124129   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:06.124069   28924 retry.go:31] will retry after 273.06976ms: waiting for machine to come up
	I0819 17:11:06.398662   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:06.399173   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:06.399204   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:06.399134   28924 retry.go:31] will retry after 366.928672ms: waiting for machine to come up
	I0819 17:11:06.767695   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:06.768082   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:06.768114   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:06.768030   28924 retry.go:31] will retry after 471.347113ms: waiting for machine to come up
	I0819 17:11:07.240569   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:07.241136   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:07.241163   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:07.241101   28924 retry.go:31] will retry after 537.842776ms: waiting for machine to come up
	I0819 17:11:07.780975   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:07.781443   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:07.781498   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:07.781419   28924 retry.go:31] will retry after 459.754858ms: waiting for machine to come up
	I0819 17:11:08.243095   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:08.243527   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:08.243550   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:08.243481   28924 retry.go:31] will retry after 601.291451ms: waiting for machine to come up
	I0819 17:11:08.846140   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:08.846555   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:08.846581   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:08.846507   28924 retry.go:31] will retry after 924.867302ms: waiting for machine to come up
	I0819 17:11:09.772643   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:09.773162   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:09.773198   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:09.773119   28924 retry.go:31] will retry after 1.203805195s: waiting for machine to come up
	I0819 17:11:10.978982   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:10.979464   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:10.979486   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:10.979427   28924 retry.go:31] will retry after 1.337086668s: waiting for machine to come up
	I0819 17:11:12.317717   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:12.318172   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:12.318199   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:12.318133   28924 retry.go:31] will retry after 1.894350017s: waiting for machine to come up
	I0819 17:11:14.214577   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:14.215034   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:14.215108   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:14.215008   28924 retry.go:31] will retry after 2.066719812s: waiting for machine to come up
	I0819 17:11:16.283726   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:16.284144   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:16.284165   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:16.284107   28924 retry.go:31] will retry after 3.274271926s: waiting for machine to come up
	I0819 17:11:19.559337   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:19.559703   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find current IP address of domain ha-227346-m03 in network mk-ha-227346
	I0819 17:11:19.559726   28158 main.go:141] libmachine: (ha-227346-m03) DBG | I0819 17:11:19.559661   28924 retry.go:31] will retry after 4.33036353s: waiting for machine to come up
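Editor's note: the retry.go lines above show the driver polling the private network for a DHCP lease, sleeping a little longer after each failed attempt (273ms, 366ms, ... up to several seconds). The following Go sketch illustrates that wait-with-backoff pattern under stated assumptions; the helper names and the specific backoff bounds are illustrative, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for the libvirt DHCP-lease query done by the kvm2
// driver; it is a placeholder that always fails, for illustration only.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until the MAC address shows up in the network's DHCP leases
// or the deadline passes, sleeping with a growing, jittered delay between
// attempts, as in the "will retry after ..." log lines above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:9c:a7:7a", 30*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}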
	I0819 17:11:23.894798   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:23.895283   28158 main.go:141] libmachine: (ha-227346-m03) Found IP for machine: 192.168.39.95
	I0819 17:11:23.895309   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:23.895320   28158 main.go:141] libmachine: (ha-227346-m03) Reserving static IP address...
	I0819 17:11:23.895646   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find host DHCP lease matching {name: "ha-227346-m03", mac: "52:54:00:9c:a7:7a", ip: "192.168.39.95"} in network mk-ha-227346
	I0819 17:11:23.970223   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Getting to WaitForSSH function...
	I0819 17:11:23.970255   28158 main.go:141] libmachine: (ha-227346-m03) Reserved static IP address: 192.168.39.95
	I0819 17:11:23.970269   28158 main.go:141] libmachine: (ha-227346-m03) Waiting for SSH to be available...
	I0819 17:11:23.972464   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:23.972812   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346
	I0819 17:11:23.972838   28158 main.go:141] libmachine: (ha-227346-m03) DBG | unable to find defined IP address of network mk-ha-227346 interface with MAC address 52:54:00:9c:a7:7a
	I0819 17:11:23.972973   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH client type: external
	I0819 17:11:23.972999   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa (-rw-------)
	I0819 17:11:23.973028   28158 main.go:141] libmachine: (ha-227346-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:11:23.973042   28158 main.go:141] libmachine: (ha-227346-m03) DBG | About to run SSH command:
	I0819 17:11:23.973061   28158 main.go:141] libmachine: (ha-227346-m03) DBG | exit 0
	I0819 17:11:23.976368   28158 main.go:141] libmachine: (ha-227346-m03) DBG | SSH cmd err, output: exit status 255: 
	I0819 17:11:23.976395   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 17:11:23.976402   28158 main.go:141] libmachine: (ha-227346-m03) DBG | command : exit 0
	I0819 17:11:23.976407   28158 main.go:141] libmachine: (ha-227346-m03) DBG | err     : exit status 255
	I0819 17:11:23.976415   28158 main.go:141] libmachine: (ha-227346-m03) DBG | output  : 
	I0819 17:11:26.978531   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Getting to WaitForSSH function...
	I0819 17:11:26.981090   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:26.981502   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:26.981530   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:26.981647   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH client type: external
	I0819 17:11:26.981676   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa (-rw-------)
	I0819 17:11:26.981719   28158 main.go:141] libmachine: (ha-227346-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:11:26.981742   28158 main.go:141] libmachine: (ha-227346-m03) DBG | About to run SSH command:
	I0819 17:11:26.981773   28158 main.go:141] libmachine: (ha-227346-m03) DBG | exit 0
	I0819 17:11:27.104490   28158 main.go:141] libmachine: (ha-227346-m03) DBG | SSH cmd err, output: <nil>: 
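Editor's note: the "Using SSH client type: external" lines show the exact argument vector the driver hands to /usr/bin/ssh, first failing with exit status 255 (no IP yet) and then succeeding once the lease appears. A small Go sketch reassembling that invocation with os/exec; the flags and paths are copied from the log, and appending "exit 0" as the remote command is a reconstruction of the "About to run SSH command" lines, not minikube's code.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Argument list as logged by the kvm2 driver's external SSH client.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@192.168.39.95",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa",
		"-p", "22",
		"exit 0", // remote command from the "About to run SSH command" line
	}
	cmd := exec.Command("/usr/bin/ssh", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	// A nil error here corresponds to the "SSH cmd err, output: <nil>" log line;
	// the earlier attempt returned exit status 255 before the guest was up.
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}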
	I0819 17:11:27.104709   28158 main.go:141] libmachine: (ha-227346-m03) KVM machine creation complete!
	I0819 17:11:27.105021   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetConfigRaw
	I0819 17:11:27.105548   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:27.105770   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:27.105906   28158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:11:27.105917   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:11:27.107064   28158 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:11:27.107078   28158 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:11:27.107083   28158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:11:27.107090   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.109178   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.109537   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.109559   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.109754   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.109922   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.110064   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.110202   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.110340   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.110527   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.110537   28158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:11:27.211831   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:11:27.211858   28158 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:11:27.211869   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.214484   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.214860   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.214883   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.215082   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.215270   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.215403   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.215517   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.215658   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.215852   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.215866   28158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:11:27.316843   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:11:27.316915   28158 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:11:27.316926   28158 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:11:27.316937   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:27.317178   28158 buildroot.go:166] provisioning hostname "ha-227346-m03"
	I0819 17:11:27.317202   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:27.317362   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.319777   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.320082   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.320104   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.320215   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.320404   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.320573   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.320692   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.320840   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.321003   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.321015   28158 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346-m03 && echo "ha-227346-m03" | sudo tee /etc/hostname
	I0819 17:11:27.440791   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346-m03
	
	I0819 17:11:27.440819   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.443593   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.443926   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.443953   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.444162   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.444382   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.444543   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.444686   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.444854   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.445019   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.445048   28158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:11:27.557081   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:11:27.557106   28158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:11:27.557123   28158 buildroot.go:174] setting up certificates
	I0819 17:11:27.557131   28158 provision.go:84] configureAuth start
	I0819 17:11:27.557139   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetMachineName
	I0819 17:11:27.557392   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:27.559867   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.560234   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.560258   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.560475   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.562756   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.563102   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.563123   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.563271   28158 provision.go:143] copyHostCerts
	I0819 17:11:27.563305   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:11:27.563344   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:11:27.563355   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:11:27.563440   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:11:27.563586   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:11:27.563616   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:11:27.563626   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:11:27.563669   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:11:27.563741   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:11:27.563758   28158 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:11:27.563764   28158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:11:27.563787   28158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:11:27.563848   28158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346-m03 san=[127.0.0.1 192.168.39.95 ha-227346-m03 localhost minikube]
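Editor's note: the provision.go line above generates a server certificate for the new node with SANs [127.0.0.1 192.168.39.95 ha-227346-m03 localhost minikube], signed by the cluster CA. The sketch below is not minikube's implementation; it is a Go standard-library illustration of building a certificate with those SANs. The CA is generated on the fly here purely so the example is self-contained, whereas minikube loads ca.pem/ca-key.pem from the .minikube/certs directory shown in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; key-generation error handling elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the provision.go log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-227346-m03"}},
		DNSNames:     []string{"ha-227346-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}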
	I0819 17:11:27.713684   28158 provision.go:177] copyRemoteCerts
	I0819 17:11:27.713736   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:11:27.713778   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.716487   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.716844   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.716891   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.717077   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.717267   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.717458   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.717577   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:27.798375   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:11:27.798443   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:11:27.820717   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:11:27.820818   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:11:27.843998   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:11:27.844066   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:11:27.867190   28158 provision.go:87] duration metric: took 310.049173ms to configureAuth
	I0819 17:11:27.867217   28158 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:11:27.867595   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:27.867692   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:27.870487   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.870891   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:27.870916   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:27.871163   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:27.871338   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.871512   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:27.871665   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:27.871846   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:27.872026   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:27.872042   28158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:11:28.136267   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:11:28.136303   28158 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:11:28.136314   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetURL
	I0819 17:11:28.137715   28158 main.go:141] libmachine: (ha-227346-m03) DBG | Using libvirt version 6000000
	I0819 17:11:28.139969   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.140395   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.140426   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.140684   28158 main.go:141] libmachine: Docker is up and running!
	I0819 17:11:28.140699   28158 main.go:141] libmachine: Reticulating splines...
	I0819 17:11:28.140708   28158 client.go:171] duration metric: took 23.611840185s to LocalClient.Create
	I0819 17:11:28.140739   28158 start.go:167] duration metric: took 23.611901411s to libmachine.API.Create "ha-227346"
	I0819 17:11:28.140765   28158 start.go:293] postStartSetup for "ha-227346-m03" (driver="kvm2")
	I0819 17:11:28.140779   28158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:11:28.140814   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.141056   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:11:28.141077   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:28.143448   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.143814   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.143842   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.143991   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.144186   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.144348   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.144488   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:28.226560   28158 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:11:28.230665   28158 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:11:28.230692   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:11:28.230774   28158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:11:28.230867   28158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:11:28.230878   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:11:28.230983   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:11:28.239824   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:11:28.262487   28158 start.go:296] duration metric: took 121.71003ms for postStartSetup
	I0819 17:11:28.262530   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetConfigRaw
	I0819 17:11:28.263093   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:28.265528   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.265920   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.265949   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.266175   28158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:11:28.266359   28158 start.go:128] duration metric: took 23.755685114s to createHost
	I0819 17:11:28.266382   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:28.268689   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.269052   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.269073   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.269206   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.269387   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.269516   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.269625   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.269738   28158 main.go:141] libmachine: Using SSH client type: native
	I0819 17:11:28.269892   28158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0819 17:11:28.269902   28158 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:11:28.373217   28158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087488.351450125
	
	I0819 17:11:28.373241   28158 fix.go:216] guest clock: 1724087488.351450125
	I0819 17:11:28.373252   28158 fix.go:229] Guest: 2024-08-19 17:11:28.351450125 +0000 UTC Remote: 2024-08-19 17:11:28.266370008 +0000 UTC m=+144.263103862 (delta=85.080117ms)
	I0819 17:11:28.373270   28158 fix.go:200] guest clock delta is within tolerance: 85.080117ms
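Editor's note: the fix.go lines compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the result because the 85ms delta is within tolerance. A tiny Go sketch of that comparison; only the pattern comes from the log, and the 2s tolerance used here is an illustrative assumption.

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the guest/host skew is small enough to
// skip resetting the guest clock, mirroring the "guest clock delta is within
// tolerance" log line above.
func withinClockTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1724087488351450125) // 1724087488.351450125 from the log
	host := guest.Add(-85080117 * time.Nanosecond)
	fmt.Println("within tolerance:", withinClockTolerance(guest, host, 2*time.Second))
}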
	I0819 17:11:28.373276   28158 start.go:83] releasing machines lock for "ha-227346-m03", held for 23.862712507s
	I0819 17:11:28.373302   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.373639   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:28.376587   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.377067   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.377097   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.379453   28158 out.go:177] * Found network options:
	I0819 17:11:28.380910   28158 out.go:177]   - NO_PROXY=192.168.39.205,192.168.39.189
	W0819 17:11:28.382103   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 17:11:28.382127   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:11:28.382144   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.382732   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.382933   28158 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:11:28.383029   28158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:11:28.383063   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	W0819 17:11:28.383088   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 17:11:28.383126   28158 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 17:11:28.383190   28158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:11:28.383209   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:11:28.385767   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386024   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386133   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.386157   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386257   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.386375   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:28.386398   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:28.386428   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.386557   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:11:28.386619   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.386748   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:11:28.386778   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:28.386845   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:11:28.386988   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:11:28.613799   28158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:11:28.620087   28158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:11:28.620174   28158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:11:28.635690   28158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
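
The find/mv step above sidelines any bridge or podman CNI configs so they cannot conflict with the CNI the cluster deploys later. A minimal Go sketch of the same rename-to-.mk_disabled idea, assuming direct filesystem access rather than minikube's SSH runner (hypothetical helper, needs root against /etc/cni/net.d):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the `find ... -exec mv` command in the log above.
func disableConflictingCNI(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableConflictingCNI("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
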
	I0819 17:11:28.635713   28158 start.go:495] detecting cgroup driver to use...
	I0819 17:11:28.635767   28158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:11:28.653193   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:11:28.666341   28158 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:11:28.666408   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:11:28.681324   28158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:11:28.695793   28158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:11:28.821347   28158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:11:28.981851   28158 docker.go:233] disabling docker service ...
	I0819 17:11:28.981909   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:11:28.996004   28158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:11:29.009194   28158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:11:29.135441   28158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:11:29.252378   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:11:29.266336   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:11:29.285515   28158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:11:29.285572   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.295076   28158 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:11:29.295136   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.305191   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.315169   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.324809   28158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:11:29.334804   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.344413   28158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:11:29.359937   28158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
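
The sed commands above pin the pause image, switch CRI-O's cgroup manager to cgroupfs and open unprivileged ports in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the first two edits as a pure string transformation (illustrative only, with a made-up sample drop-in; minikube performs these as sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

// applyCRIOSettings mirrors the pause_image and cgroup_manager sed edits:
// whatever values the drop-in currently holds are replaced with the desired ones.
func applyCRIOSettings(conf, pauseImage, cgroupManager string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroup.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	// Sample drop-in content for illustration.
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCRIOSettings(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}
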
	I0819 17:11:29.371146   28158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:11:29.381156   28158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:11:29.381214   28158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:11:29.396311   28158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:11:29.407612   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:11:29.525713   28158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:11:29.666802   28158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:11:29.666870   28158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:11:29.671238   28158 start.go:563] Will wait 60s for crictl version
	I0819 17:11:29.671284   28158 ssh_runner.go:195] Run: which crictl
	I0819 17:11:29.674762   28158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:11:29.714027   28158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:11:29.714110   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:11:29.741537   28158 ssh_runner.go:195] Run: crio --version
	I0819 17:11:29.770866   28158 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:11:29.772404   28158 out.go:177]   - env NO_PROXY=192.168.39.205
	I0819 17:11:29.773657   28158 out.go:177]   - env NO_PROXY=192.168.39.205,192.168.39.189
	I0819 17:11:29.774921   28158 main.go:141] libmachine: (ha-227346-m03) Calling .GetIP
	I0819 17:11:29.777679   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:29.778100   28158 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:11:29.778125   28158 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:11:29.778344   28158 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:11:29.782120   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:11:29.793730   28158 mustload.go:65] Loading cluster: ha-227346
	I0819 17:11:29.793942   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:29.794193   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:29.794238   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:29.810061   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I0819 17:11:29.810397   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:29.810856   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:29.810877   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:29.811174   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:29.811356   28158 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:11:29.812979   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:11:29.813359   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:29.813397   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:29.827628   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0819 17:11:29.827979   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:29.828451   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:29.828479   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:29.828782   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:29.828973   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:11:29.829149   28158 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.95
	I0819 17:11:29.829160   28158 certs.go:194] generating shared ca certs ...
	I0819 17:11:29.829173   28158 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:11:29.829296   28158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:11:29.829363   28158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:11:29.829385   28158 certs.go:256] generating profile certs ...
	I0819 17:11:29.829470   28158 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:11:29.829498   28158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0
	I0819 17:11:29.829513   28158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.189 192.168.39.95 192.168.39.254]
	I0819 17:11:29.904964   28158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0 ...
	I0819 17:11:29.904995   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0: {Name:mkd267ee1d478f75426afaa32d391f83a54bf88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:11:29.905167   28158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0 ...
	I0819 17:11:29.905184   28158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0: {Name:mkcaafd208354760e3cb5f5e92c19ee041550ac5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:11:29.905274   28158 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.a38b4dd0 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:11:29.905427   28158 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.a38b4dd0 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
	I0819 17:11:29.905578   28158 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:11:29.905594   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:11:29.905612   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:11:29.905629   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:11:29.905648   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:11:29.905666   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:11:29.905683   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:11:29.905701   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:11:29.905719   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:11:29.905790   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:11:29.905831   28158 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:11:29.905844   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:11:29.905881   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:11:29.905913   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:11:29.905944   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:11:29.905997   28158 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:11:29.906033   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:11:29.906054   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:29.906073   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:11:29.906129   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:11:29.908886   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:29.909333   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:11:29.909356   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:29.909498   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:11:29.909681   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:11:29.909831   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:11:29.909951   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:11:29.985058   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 17:11:29.989402   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 17:11:29.999229   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 17:11:30.003430   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 17:11:30.014843   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 17:11:30.018683   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 17:11:30.029939   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 17:11:30.033838   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0819 17:11:30.044547   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 17:11:30.049425   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 17:11:30.059299   28158 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 17:11:30.063249   28158 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0819 17:11:30.074985   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:11:30.098380   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:11:30.120042   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:11:30.141832   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:11:30.163415   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0819 17:11:30.185609   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:11:30.206567   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:11:30.227691   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:11:30.248662   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:11:30.270287   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:11:30.292948   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:11:30.314605   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 17:11:30.330217   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 17:11:30.346151   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 17:11:30.361380   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0819 17:11:30.375877   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 17:11:30.391039   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0819 17:11:30.406523   28158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 17:11:30.422898   28158 ssh_runner.go:195] Run: openssl version
	I0819 17:11:30.428071   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:11:30.438558   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:11:30.443023   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:11:30.443069   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:11:30.449050   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:11:30.459207   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:11:30.469213   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:11:30.472943   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:11:30.472983   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:11:30.478039   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:11:30.488052   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:11:30.498253   28158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:30.502142   28158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:30.502189   28158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:11:30.507250   28158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
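
Each certificate above is made trusted by hashing it with `openssl x509 -hash` and symlinking it into /etc/ssl/certs as `<hash>.0`, which is the lookup scheme OpenSSL uses for its trust directory. A small Go sketch of those two steps (hypothetical helper; requires openssl on PATH and root to write the symlink):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a certificate and links it
// into /etc/ssl/certs as "<hash>.0", mirroring the openssl/ln steps above.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
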
	I0819 17:11:30.517477   28158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:11:30.521425   28158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:11:30.521479   28158 kubeadm.go:934] updating node {m03 192.168.39.95 8443 v1.31.0 crio true true} ...
	I0819 17:11:30.521567   28158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:11:30.521605   28158 kube-vip.go:115] generating kube-vip config ...
	I0819 17:11:30.521646   28158 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:11:30.537160   28158 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:11:30.537227   28158 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
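
The kube-vip static pod above is what hosts the control-plane VIP 192.168.39.254 on each control-plane node: leader election via the plndr-cp-lock lease, ARP advertisement of the address, and load-balancing of port 8443. A cut-down Go text/template sketch of rendering such a manifest, parameterised on the values that vary per cluster; the real template behind the kube-vip.go lines above is fuller than this:

package main

import (
	"os"
	"text/template"
)

// A reduced static-pod manifest with only the per-cluster values templated.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Image": "ghcr.io/kube-vip/kube-vip:v0.8.0",
		"VIP":   "192.168.39.254",
		"Port":  "8443",
	})
}
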
	I0819 17:11:30.537286   28158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:11:30.546975   28158 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0819 17:11:30.547044   28158 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0819 17:11:30.556377   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0819 17:11:30.556407   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:11:30.556433   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0819 17:11:30.556452   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:11:30.556471   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0819 17:11:30.556383   28158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0819 17:11:30.556537   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0819 17:11:30.556566   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:11:30.569921   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0819 17:11:30.569960   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0819 17:11:30.569967   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0819 17:11:30.569991   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0819 17:11:30.570000   28158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:11:30.570077   28158 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0819 17:11:30.599564   28158 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0819 17:11:30.599624   28158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
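
The kubeadm/kubectl/kubelet binaries are fetched from dl.k8s.io with a `?checksum=file:<url>.sha256` hint, i.e. each download is checked against the published SHA-256 file. A hedged Go sketch of that verification for a cached kubelet (hypothetical helper and paths, not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verifyChecksum compares a local binary's SHA-256 against the published
// .sha256 file, the same pairing the "?checksum=file:..." URLs above use.
func verifyChecksum(binPath, sumURL string) error {
	resp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	sumBytes, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes)) // "<hex>" or "<hex>  filename"
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file at %s", sumURL)
	}
	want := fields[0]

	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyChecksum(
		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.0/kubelet"),
		"https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256",
	)
	fmt.Println("verify:", err)
}
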
	I0819 17:11:31.367847   28158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 17:11:31.377149   28158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 17:11:31.392618   28158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:11:31.408233   28158 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 17:11:31.423050   28158 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:11:31.426519   28158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
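
The grep/echo pipeline above makes the control-plane.minikube.internal mapping idempotent: any old line for that name is dropped and the current VIP is appended. A Go sketch of the same rewrite, writing to a sibling file instead of sudo-copying over /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing "<ip>\t<name>" line and appends the
// desired mapping, mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
	fmt.Println(err)
}
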
	I0819 17:11:31.437881   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:11:31.560914   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:11:31.579361   28158 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:11:31.579688   28158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:11:31.579736   28158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:11:31.595797   28158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0819 17:11:31.596267   28158 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:11:31.596829   28158 main.go:141] libmachine: Using API Version  1
	I0819 17:11:31.596856   28158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:11:31.597154   28158 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:11:31.597337   28158 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:11:31.597464   28158 start.go:317] joinCluster: &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:11:31.597610   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0819 17:11:31.597625   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:11:31.600419   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:31.600882   28158 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:11:31.600911   28158 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:11:31.601001   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:11:31.601158   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:11:31.601309   28158 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:11:31.601472   28158 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:11:31.745973   28158 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:11:31.746014   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gklo5r.t543lv6u7mp614yz --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I0819 17:11:54.732468   28158 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gklo5r.t543lv6u7mp614yz --discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-227346-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (22.98643143s)
	I0819 17:11:54.732501   28158 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0819 17:11:55.231779   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-227346-m03 minikube.k8s.io/updated_at=2024_08_19T17_11_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=ha-227346 minikube.k8s.io/primary=false
	I0819 17:11:55.346954   28158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-227346-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0819 17:11:55.468818   28158 start.go:319] duration metric: took 23.871350348s to joinCluster
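
The kubeadm join command above carries a --discovery-token-ca-cert-hash, which for kubeadm is "sha256:" plus the SHA-256 of the cluster CA certificate's Subject Public Key Info. A short Go sketch reproducing that value from a CA certificate (the path is an example):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the discovery-token-ca-cert-hash value: the SHA-256 of
// the CA's DER-encoded Subject Public Key Info, prefixed with "sha256:".
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(h, err)
}
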
	I0819 17:11:55.468890   28158 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:11:55.469173   28158 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:11:55.470511   28158 out.go:177] * Verifying Kubernetes components...
	I0819 17:11:55.471891   28158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:11:55.689068   28158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:11:55.717022   28158 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:11:55.717287   28158 kapi.go:59] client config for ha-227346: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.crt", KeyFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key", CAFile:"/home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 17:11:55.717345   28158 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.205:8443
	I0819 17:11:55.717563   28158 node_ready.go:35] waiting up to 6m0s for node "ha-227346-m03" to be "Ready" ...
	I0819 17:11:55.717647   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:55.717657   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:55.717668   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:55.717677   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:55.721697   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
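
The repeated GET /api/v1/nodes/ha-227346-m03 requests in this loop poll the node object until its Ready condition turns True, with node_ready.go allowing up to 6m. A rough client-go equivalent, assuming a standard kubeconfig rather than the hand-rolled round-tripper logging seen here:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout expires, roughly what the GET loop in the log does by hand.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready after %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "ha-227346-m03", 6*time.Minute))
}
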
	I0819 17:11:56.218102   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:56.218124   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:56.218133   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:56.218137   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:56.221759   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:56.717988   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:56.718011   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:56.718021   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:56.718026   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:56.721656   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:57.217743   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:57.217764   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:57.217775   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:57.217784   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:57.221371   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:57.718297   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:57.718322   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:57.718330   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:57.718333   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:57.722175   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:57.722740   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:11:58.217968   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:58.217990   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:58.217998   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:58.218002   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:58.221606   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:58.718628   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:58.718651   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:58.718659   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:58.718663   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:58.722052   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:11:59.217809   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:59.217830   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:59.217842   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:59.217848   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:59.220798   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:11:59.718523   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:11:59.718545   28158 round_trippers.go:469] Request Headers:
	I0819 17:11:59.718553   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:11:59.718558   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:11:59.721957   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:00.217829   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:00.217849   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:00.217860   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:00.217864   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:00.221107   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:00.221738   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:00.718070   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:00.718092   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:00.718100   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:00.718105   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:00.720812   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:01.218328   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:01.218359   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:01.218372   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:01.218378   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:01.221632   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:01.717989   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:01.718015   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:01.718026   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:01.718032   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:01.721601   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:02.218637   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:02.218662   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:02.218672   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:02.218677   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:02.222088   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:02.222659   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:02.718521   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:02.718547   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:02.718559   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:02.718565   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:02.722975   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:03.217753   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:03.217786   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:03.217797   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:03.217803   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:03.220984   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:03.718016   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:03.718039   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:03.718052   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:03.718058   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:03.721140   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:04.218203   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:04.218227   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:04.218235   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:04.218240   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:04.222558   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:04.223332   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:04.718161   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:04.718186   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:04.718196   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:04.718200   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:04.722420   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:05.218653   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:05.218673   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:05.218681   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:05.218686   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:05.221471   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:05.718390   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:05.718414   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:05.718424   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:05.718427   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:05.722029   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:06.218649   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:06.218668   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:06.218676   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:06.218681   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:06.222025   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:06.718161   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:06.718187   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:06.718196   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:06.718202   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:06.722090   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:06.722871   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:07.217794   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:07.217816   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:07.217824   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:07.217828   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:07.223241   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:07.718053   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:07.718076   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:07.718086   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:07.718092   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:07.721899   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:08.217854   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:08.217879   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:08.217890   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:08.217896   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:08.221384   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:08.718252   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:08.718285   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:08.718296   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:08.718302   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:08.721661   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:09.218524   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:09.218546   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:09.218554   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:09.218558   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:09.222605   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:09.223217   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:09.718138   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:09.718160   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:09.718169   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:09.718172   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:09.721759   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:10.218645   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:10.218670   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:10.218680   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:10.218685   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:10.222351   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:10.718475   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:10.718502   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:10.718512   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:10.718517   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:10.722308   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:11.218730   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:11.218751   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:11.218759   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:11.218763   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:11.222028   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:11.717962   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:11.717985   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:11.717993   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:11.717998   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:11.721365   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:11.721979   28158 node_ready.go:53] node "ha-227346-m03" has status "Ready":"False"
	I0819 17:12:12.218499   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:12.218528   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:12.218540   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:12.218545   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:12.221993   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:12.717772   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:12.717794   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:12.717802   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:12.717806   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:12.721184   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:13.217722   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:13.217764   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.217772   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.217775   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.221515   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:13.718649   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:13.718677   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.718685   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.718690   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.722013   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:13.722694   28158 node_ready.go:49] node "ha-227346-m03" has status "Ready":"True"
	I0819 17:12:13.722721   28158 node_ready.go:38] duration metric: took 18.005141057s for node "ha-227346-m03" to be "Ready" ...
	I0819 17:12:13.722743   28158 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:12:13.722805   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:13.722813   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.722821   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.722825   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.741417   28158 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0819 17:12:13.749956   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.750055   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-9s77g
	I0819 17:12:13.750066   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.750077   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.750089   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.759295   28158 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 17:12:13.759942   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:13.759959   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.759968   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.759974   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.764905   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:13.765725   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.765743   28158 pod_ready.go:82] duration metric: took 15.756145ms for pod "coredns-6f6b679f8f-9s77g" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.765756   28158 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.765816   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-r68td
	I0819 17:12:13.765826   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.765836   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.765843   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.775682   28158 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 17:12:13.776257   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:13.776271   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.776281   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.776288   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.784462   28158 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 17:12:13.785030   28158 pod_ready.go:93] pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.785050   28158 pod_ready.go:82] duration metric: took 19.286464ms for pod "coredns-6f6b679f8f-r68td" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.785066   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.785127   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346
	I0819 17:12:13.785136   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.785145   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.785151   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.789445   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:13.790074   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:13.790088   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.790098   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.790104   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.794738   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:13.795297   28158 pod_ready.go:93] pod "etcd-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.795319   28158 pod_ready.go:82] duration metric: took 10.2455ms for pod "etcd-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.795331   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.795393   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m02
	I0819 17:12:13.795403   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.795417   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.795424   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.797736   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:13.798295   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:13.798312   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.798319   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.798322   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.800436   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:13.800932   28158 pod_ready.go:93] pod "etcd-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:13.800949   28158 pod_ready.go:82] duration metric: took 5.610847ms for pod "etcd-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.800957   28158 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:13.919385   28158 request.go:632] Waited for 118.367661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m03
	I0819 17:12:13.919475   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/etcd-ha-227346-m03
	I0819 17:12:13.919487   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:13.919497   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:13.919507   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:13.924018   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:14.119138   28158 request.go:632] Waited for 194.245348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:14.119192   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:14.119198   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.119208   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.119213   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.122664   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.123396   28158 pod_ready.go:93] pod "etcd-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:14.123412   28158 pod_ready.go:82] duration metric: took 322.449239ms for pod "etcd-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.123434   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.318754   28158 request.go:632] Waited for 195.248967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:12:14.318844   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346
	I0819 17:12:14.318855   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.318867   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.318875   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.322565   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.519571   28158 request.go:632] Waited for 196.355039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:14.519632   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:14.519637   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.519644   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.519647   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.522797   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.523450   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:14.523467   28158 pod_ready.go:82] duration metric: took 400.022092ms for pod "kube-apiserver-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.523476   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.718826   28158 request.go:632] Waited for 195.289288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:12:14.718894   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m02
	I0819 17:12:14.718899   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.718907   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.718912   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.722295   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:14.919063   28158 request.go:632] Waited for 195.752698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:14.919127   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:14.919134   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:14.919146   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:14.919152   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:14.923184   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:14.923742   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:14.923759   28158 pod_ready.go:82] duration metric: took 400.275603ms for pod "kube-apiserver-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:14.923770   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.118989   28158 request.go:632] Waited for 195.152436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m03
	I0819 17:12:15.119062   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-227346-m03
	I0819 17:12:15.119069   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.119082   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.119090   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.122088   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:15.319225   28158 request.go:632] Waited for 196.358865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:15.319292   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:15.319302   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.319313   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.319320   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.322339   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:15.323095   28158 pod_ready.go:93] pod "kube-apiserver-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:15.323112   28158 pod_ready.go:82] duration metric: took 399.335876ms for pod "kube-apiserver-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.323122   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.519334   28158 request.go:632] Waited for 196.150379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:12:15.519392   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346
	I0819 17:12:15.519397   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.519405   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.519409   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.522566   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:15.718684   28158 request.go:632] Waited for 195.3553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:15.718769   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:15.718775   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.718788   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.718793   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.722303   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:15.722793   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:15.722810   28158 pod_ready.go:82] duration metric: took 399.681477ms for pod "kube-controller-manager-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.722822   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:15.918907   28158 request.go:632] Waited for 196.018435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:12:15.918992   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m02
	I0819 17:12:15.919015   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:15.919023   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:15.919034   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:15.925867   28158 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 17:12:16.118758   28158 request.go:632] Waited for 192.273548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.118822   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.118829   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.118849   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.118873   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.122242   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.122835   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:16.122854   28158 pod_ready.go:82] duration metric: took 400.025629ms for pod "kube-controller-manager-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.122865   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.319266   28158 request.go:632] Waited for 196.342359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m03
	I0819 17:12:16.319325   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-227346-m03
	I0819 17:12:16.319331   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.319341   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.319346   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.322738   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.519500   28158 request.go:632] Waited for 195.729905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:16.519566   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:16.519575   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.519585   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.519595   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.523553   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.524208   28158 pod_ready.go:93] pod "kube-controller-manager-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:16.524228   28158 pod_ready.go:82] duration metric: took 401.354941ms for pod "kube-controller-manager-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.524238   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.719710   28158 request.go:632] Waited for 195.413497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:12:16.719763   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6lhlp
	I0819 17:12:16.719769   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.719776   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.719781   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.723404   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:16.918662   28158 request.go:632] Waited for 194.283424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.918753   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:16.918764   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:16.918774   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:16.918778   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:16.923165   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:16.923856   28158 pod_ready.go:93] pod "kube-proxy-6lhlp" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:16.923882   28158 pod_ready.go:82] duration metric: took 399.635573ms for pod "kube-proxy-6lhlp" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:16.923895   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.118925   28158 request.go:632] Waited for 194.967403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:12:17.118989   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xpm4
	I0819 17:12:17.118997   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.119005   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.119010   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.122321   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.319330   28158 request.go:632] Waited for 196.262827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:17.319425   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:17.319437   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.319448   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.319457   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.323046   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.323651   28158 pod_ready.go:93] pod "kube-proxy-9xpm4" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:17.323670   28158 pod_ready.go:82] duration metric: took 399.767781ms for pod "kube-proxy-9xpm4" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.323679   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxvbj" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.519746   28158 request.go:632] Waited for 195.98484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxvbj
	I0819 17:12:17.519801   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxvbj
	I0819 17:12:17.519806   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.519814   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.519818   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.523219   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.719516   28158 request.go:632] Waited for 195.248597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:17.719582   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:17.719590   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.719597   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.719601   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.723301   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:17.723950   28158 pod_ready.go:93] pod "kube-proxy-sxvbj" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:17.723975   28158 pod_ready.go:82] duration metric: took 400.288816ms for pod "kube-proxy-sxvbj" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.723988   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:17.918820   28158 request.go:632] Waited for 194.75048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:12:17.918909   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346
	I0819 17:12:17.918926   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:17.918939   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:17.918946   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:17.924269   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:18.119515   28158 request.go:632] Waited for 194.352171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:18.119570   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346
	I0819 17:12:18.119575   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.119583   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.119598   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.122736   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.123500   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:18.123523   28158 pod_ready.go:82] duration metric: took 399.523466ms for pod "kube-scheduler-ha-227346" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.123536   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.319503   28158 request.go:632] Waited for 195.888785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:12:18.319573   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m02
	I0819 17:12:18.319581   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.319590   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.319596   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.322847   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.518991   28158 request.go:632] Waited for 195.347278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:18.519080   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m02
	I0819 17:12:18.519093   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.519105   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.519113   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.522187   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.522787   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:18.522806   28158 pod_ready.go:82] duration metric: took 399.258763ms for pod "kube-scheduler-ha-227346-m02" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.522814   28158 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.718903   28158 request.go:632] Waited for 196.006806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m03
	I0819 17:12:18.718958   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-227346-m03
	I0819 17:12:18.718964   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.718973   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.718977   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.722415   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.919588   28158 request.go:632] Waited for 196.387669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:18.919641   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes/ha-227346-m03
	I0819 17:12:18.919648   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.919668   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.919688   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.923365   28158 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 17:12:18.923942   28158 pod_ready.go:93] pod "kube-scheduler-ha-227346-m03" in "kube-system" namespace has status "Ready":"True"
	I0819 17:12:18.923969   28158 pod_ready.go:82] duration metric: took 401.146883ms for pod "kube-scheduler-ha-227346-m03" in "kube-system" namespace to be "Ready" ...
	I0819 17:12:18.923984   28158 pod_ready.go:39] duration metric: took 5.201230703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:12:18.924004   28158 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:12:18.924068   28158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:12:18.942030   28158 api_server.go:72] duration metric: took 23.473102266s to wait for apiserver process to appear ...
	I0819 17:12:18.942060   28158 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:12:18.942081   28158 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0819 17:12:18.946839   28158 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0819 17:12:18.946912   28158 round_trippers.go:463] GET https://192.168.39.205:8443/version
	I0819 17:12:18.946922   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:18.946937   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:18.946951   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:18.948267   28158 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 17:12:18.948441   28158 api_server.go:141] control plane version: v1.31.0
	I0819 17:12:18.948464   28158 api_server.go:131] duration metric: took 6.396635ms to wait for apiserver health ...
	I0819 17:12:18.948473   28158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:12:19.118902   28158 request.go:632] Waited for 170.356227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.118972   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.118977   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.118985   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.118990   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.124102   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:19.130518   28158 system_pods.go:59] 24 kube-system pods found
	I0819 17:12:19.130548   28158 system_pods.go:61] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:12:19.130555   28158 system_pods.go:61] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:12:19.130558   28158 system_pods.go:61] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:12:19.130561   28158 system_pods.go:61] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:12:19.130565   28158 system_pods.go:61] "etcd-ha-227346-m03" [fb82b188-0187-4e5c-8829-5f498230f2dd] Running
	I0819 17:12:19.130568   28158 system_pods.go:61] "kindnet-2xfpd" [8ddc9fb1-b06d-43bb-b73e-ea2d505a36ab] Running
	I0819 17:12:19.130571   28158 system_pods.go:61] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:12:19.130574   28158 system_pods.go:61] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:12:19.130583   28158 system_pods.go:61] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:12:19.130592   28158 system_pods.go:61] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:12:19.130597   28158 system_pods.go:61] "kube-apiserver-ha-227346-m03" [cbf722b2-fc26-47e0-9f1e-4032d618b101] Running
	I0819 17:12:19.130605   28158 system_pods.go:61] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:12:19.130614   28158 system_pods.go:61] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:12:19.130622   28158 system_pods.go:61] "kube-controller-manager-ha-227346-m03" [4b169608-0121-4f1f-8054-90eb0dd36462] Running
	I0819 17:12:19.130627   28158 system_pods.go:61] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:12:19.130635   28158 system_pods.go:61] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:12:19.130643   28158 system_pods.go:61] "kube-proxy-sxvbj" [59969a00-8b2e-4dd9-91d7-855f3ae4563e] Running
	I0819 17:12:19.130649   28158 system_pods.go:61] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:12:19.130657   28158 system_pods.go:61] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:12:19.130662   28158 system_pods.go:61] "kube-scheduler-ha-227346-m03" [aed0cf90-9cff-460f-8f33-e0b6d3dc6fac] Running
	I0819 17:12:19.130670   28158 system_pods.go:61] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:12:19.130678   28158 system_pods.go:61] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:12:19.130683   28158 system_pods.go:61] "kube-vip-ha-227346-m03" [e2f0e172-5175-4dde-ba66-3e0238d33afd] Running
	I0819 17:12:19.130690   28158 system_pods.go:61] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:12:19.130700   28158 system_pods.go:74] duration metric: took 182.220943ms to wait for pod list to return data ...
	I0819 17:12:19.130712   28158 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:12:19.319364   28158 request.go:632] Waited for 188.573996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:12:19.319420   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/default/serviceaccounts
	I0819 17:12:19.319426   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.319433   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.319436   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.322238   28158 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 17:12:19.322352   28158 default_sa.go:45] found service account: "default"
	I0819 17:12:19.322368   28158 default_sa.go:55] duration metric: took 191.648122ms for default service account to be created ...
	I0819 17:12:19.322377   28158 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:12:19.518751   28158 request.go:632] Waited for 196.29873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.518822   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/namespaces/kube-system/pods
	I0819 17:12:19.518836   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.518847   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.518854   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.524177   28158 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 17:12:19.530349   28158 system_pods.go:86] 24 kube-system pods found
	I0819 17:12:19.530374   28158 system_pods.go:89] "coredns-6f6b679f8f-9s77g" [28ea7cc3-2a78-4b29-82c7-7a028d357471] Running
	I0819 17:12:19.530380   28158 system_pods.go:89] "coredns-6f6b679f8f-r68td" [e48b2c24-94f7-4ca4-8f99-420706cd0cb3] Running
	I0819 17:12:19.530384   28158 system_pods.go:89] "etcd-ha-227346" [b9aafb60-6b1d-4248-8f98-d01e70c686d5] Running
	I0819 17:12:19.530388   28158 system_pods.go:89] "etcd-ha-227346-m02" [f1063ae2-aa38-45f3-836e-656af135b070] Running
	I0819 17:12:19.530391   28158 system_pods.go:89] "etcd-ha-227346-m03" [fb82b188-0187-4e5c-8829-5f498230f2dd] Running
	I0819 17:12:19.530394   28158 system_pods.go:89] "kindnet-2xfpd" [8ddc9fb1-b06d-43bb-b73e-ea2d505a36ab] Running
	I0819 17:12:19.530397   28158 system_pods.go:89] "kindnet-lwjmd" [55731455-5f1e-4499-ae63-a8ad06f5553f] Running
	I0819 17:12:19.530400   28158 system_pods.go:89] "kindnet-mk55z" [74059a09-c1fc-4d9f-a890-1f8bfa8fff1b] Running
	I0819 17:12:19.530404   28158 system_pods.go:89] "kube-apiserver-ha-227346" [31f54ee5-872e-42cc-88b9-19a4d827370f] Running
	I0819 17:12:19.530407   28158 system_pods.go:89] "kube-apiserver-ha-227346-m02" [d5a4d799-e30b-4614-9d15-4312f9842feb] Running
	I0819 17:12:19.530411   28158 system_pods.go:89] "kube-apiserver-ha-227346-m03" [cbf722b2-fc26-47e0-9f1e-4032d618b101] Running
	I0819 17:12:19.530414   28158 system_pods.go:89] "kube-controller-manager-ha-227346" [a9cc90a6-4c65-4606-8dd0-f38d3910ea72] Running
	I0819 17:12:19.530418   28158 system_pods.go:89] "kube-controller-manager-ha-227346-m02" [ecd218d3-0756-49e0-8f38-d12e877f807b] Running
	I0819 17:12:19.530421   28158 system_pods.go:89] "kube-controller-manager-ha-227346-m03" [4b169608-0121-4f1f-8054-90eb0dd36462] Running
	I0819 17:12:19.530427   28158 system_pods.go:89] "kube-proxy-6lhlp" [59fb9bb4-5dc4-421b-a00f-941b25c16b20] Running
	I0819 17:12:19.530430   28158 system_pods.go:89] "kube-proxy-9xpm4" [56e3f9ad-e32e-4a45-9184-72fd5076b2f7] Running
	I0819 17:12:19.530433   28158 system_pods.go:89] "kube-proxy-sxvbj" [59969a00-8b2e-4dd9-91d7-855f3ae4563e] Running
	I0819 17:12:19.530436   28158 system_pods.go:89] "kube-scheduler-ha-227346" [35d1cd4d-b090-4459-9872-e6f669045e2b] Running
	I0819 17:12:19.530439   28158 system_pods.go:89] "kube-scheduler-ha-227346-m02" [4b6874b9-1d5d-4b3b-8d4b-022ef04c6a3e] Running
	I0819 17:12:19.530445   28158 system_pods.go:89] "kube-scheduler-ha-227346-m03" [aed0cf90-9cff-460f-8f33-e0b6d3dc6fac] Running
	I0819 17:12:19.530454   28158 system_pods.go:89] "kube-vip-ha-227346" [0f27551d-8d73-4f32-8f52-048bb3dfa992] Running
	I0819 17:12:19.530458   28158 system_pods.go:89] "kube-vip-ha-227346-m02" [ecaf3a81-a97d-42a0-b35d-c3c6c84efb21] Running
	I0819 17:12:19.530463   28158 system_pods.go:89] "kube-vip-ha-227346-m03" [e2f0e172-5175-4dde-ba66-3e0238d33afd] Running
	I0819 17:12:19.530471   28158 system_pods.go:89] "storage-provisioner" [f4ed502e-5b16-4a13-9e5f-c1d271bea40b] Running
	I0819 17:12:19.530479   28158 system_pods.go:126] duration metric: took 208.094264ms to wait for k8s-apps to be running ...
	I0819 17:12:19.530490   28158 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:12:19.530546   28158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:12:19.547883   28158 system_svc.go:56] duration metric: took 17.386016ms WaitForService to wait for kubelet
	I0819 17:12:19.547914   28158 kubeadm.go:582] duration metric: took 24.078991194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:12:19.547931   28158 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:12:19.719314   28158 request.go:632] Waited for 171.31193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.205:8443/api/v1/nodes
	I0819 17:12:19.719361   28158 round_trippers.go:463] GET https://192.168.39.205:8443/api/v1/nodes
	I0819 17:12:19.719366   28158 round_trippers.go:469] Request Headers:
	I0819 17:12:19.719376   28158 round_trippers.go:473]     Accept: application/json, */*
	I0819 17:12:19.719380   28158 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 17:12:19.723418   28158 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 17:12:19.724417   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:12:19.724445   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:12:19.724460   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:12:19.724466   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:12:19.724473   28158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 17:12:19.724479   28158 node_conditions.go:123] node cpu capacity is 2
	I0819 17:12:19.724486   28158 node_conditions.go:105] duration metric: took 176.55004ms to run NodePressure ...
	I0819 17:12:19.724502   28158 start.go:241] waiting for startup goroutines ...
	I0819 17:12:19.724536   28158 start.go:255] writing updated cluster config ...
	I0819 17:12:19.724873   28158 ssh_runner.go:195] Run: rm -f paused
	I0819 17:12:19.775087   28158 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:12:19.777054   28158 out.go:177] * Done! kubectl is now configured to use "ha-227346" cluster and "default" namespace by default
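
	The polling loop above is minikube's readiness gate: it re-GETs the node object until its Ready condition flips to True, waits on each system-critical pod, and finally probes the apiserver's /healthz endpoint; the "client-side throttling" messages come from client-go's default request rate limiter. The same pattern can be reproduced with plain client-go. The sketch below is illustrative only (the node name, kubeconfig path, poll interval, and timeout are assumptions for this report), not minikube's internal node_ready/pod_ready code.

	// readiness_sketch.go -- illustrative sketch, not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes ~/.kube/config already points at the "ha-227346" cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS 5 / Burst 10, which is what produces the
		// "Waited ... due to client-side throttling" lines above.
		cfg.QPS = 20
		cfg.Burst = 40

		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		nodeName := "ha-227346-m03" // the node being polled in the log; adjust as needed

		// Re-GET the node roughly every 500ms (the log shows ~500ms spacing)
		// until its Ready condition is True or the timeout expires.
		err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}

		// Mirror of the "Checking apiserver healthz" step.
		body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("node %s Ready; /healthz returned %q\n", nodeName, string(body))
	}

	Waiting on the system-critical pods follows the same shape, substituting Pods("kube-system").Get and the pod's Ready condition for the node lookup.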
	
	
	==> CRI-O <==
	Aug 19 17:17:02 ha-227346 crio[676]: time="2024-08-19 17:17:02.985598908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087822985573785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b1bf5e3-21e3-42ba-9b76-6f569789f225 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:17:02 ha-227346 crio[676]: time="2024-08-19 17:17:02.986145533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e3c106b-b5d4-4980-9b5e-b79b1c18779f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:02 ha-227346 crio[676]: time="2024-08-19 17:17:02.986575274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e3c106b-b5d4-4980-9b5e-b79b1c18779f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:02 ha-227346 crio[676]: time="2024-08-19 17:17:02.988322967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e3c106b-b5d4-4980-9b5e-b79b1c18779f name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.027394172Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc186388-b953-4840-9548-7f632431ac67 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.027482350Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc186388-b953-4840-9548-7f632431ac67 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.028653858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2105510-9e35-4c10-a49e-8595dd31514b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.029117801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087823029052194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2105510-9e35-4c10-a49e-8595dd31514b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.029662791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dace8c6-7b76-4b43-a11e-84cde998cb45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.029719930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dace8c6-7b76-4b43-a11e-84cde998cb45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.029938919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dace8c6-7b76-4b43-a11e-84cde998cb45 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.077887318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0c09d01-0e04-4ae3-b09a-2fb25f4f0109 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.077965576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0c09d01-0e04-4ae3-b09a-2fb25f4f0109 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.079046746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fe14aeb-7489-45db-aff2-19777a7c330b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.079529252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087823079504330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fe14aeb-7489-45db-aff2-19777a7c330b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.080140463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63884ed3-fc81-40c6-bef0-c3374fdc1bd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.080189208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63884ed3-fc81-40c6-bef0-c3374fdc1bd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.080408786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63884ed3-fc81-40c6-bef0-c3374fdc1bd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.115867641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02ace1f6-69ff-439e-917f-047db582a928 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.115945457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02ace1f6-69ff-439e-917f-047db582a928 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.116910055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c26b0d07-304a-481e-a200-66a9d471acb9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.117447347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087823117422588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c26b0d07-304a-481e-a200-66a9d471acb9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.118209630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4320a445-d9d6-4ecc-81df-83a3615649f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.118273655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4320a445-d9d6-4ecc-81df-83a3615649f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:17:03 ha-227346 crio[676]: time="2024-08-19 17:17:03.118481175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4,PodSandboxId:d17668585f28306f28db99feece1784de6aa63f06dbcfb4366fb7eec9fdb176f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401609162672,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7400c3a3872edf318e75739648aae2611b2546097e60db290746f1fb56754ea8,PodSandboxId:60ebfd22a6daabbec9b4f7ca3492bed16e92aa4d3b564af76efe421db0d628b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724087401600598615,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6,PodSandboxId:92d2a303608839cc1109ab91e6e3155adb29d42f091e67491ea5ad49df2e0380,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087401549192860,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-9
4f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9,PodSandboxId:4c49ea56223c8d049bba8bbcbafc5116d535d1f031f0daa330a9203c93581c40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:C
ONTAINER_RUNNING,CreatedAt:1724087389563269206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd,PodSandboxId:8ca1f3b2cdf2920deef82a979d748a78e211c1517103c86f95077db71ba0d2c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:17240873
85840795635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5eaaf42a1219e6ac1e9000a3ba1082671e92052332baed607868cb6bac05eda,PodSandboxId:4f89b348afb842ede879b3eab75131b82a33f6c30ac932b902c0c3018906c3a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17240873770
42532913,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc9c7f75dbb6c6cee4b15b7cca50da4,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453,PodSandboxId:2a4dcc8805294b2b027a86fa7dd117e19c7d329498c1f5c368751aea86a1f8b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724087374591660655,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547,PodSandboxId:0cc361224291f84c537bb02cdf155e50cec0ad5e03d0cbac72100ec67c93cbc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724087374543971049,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed,PodSandboxId:9fe350c701f53099986c6b3688d025330851cb1b25086fc0bf320f002669768c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087374490404389,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577,PodSandboxId:3813d79e090bd79f6c0d0c0726b97f41d07f43b957e066eac7f2fcb9118418b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087374524455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4320a445-d9d6-4ecc-81df-83a3615649f7 name=/runtime.v1.RuntimeService/ListContainers
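The repeated Version, ImageFsInfo, and ListContainers entries above are routine CRI gRPC calls against CRI-O (runtime version checks, image filesystem stats, and full container listings); note that the container list itself is identical from response to response. As a rough sketch, using standard crictl flags rather than a command taken from this run, the same listing can be pulled on the node directly from the CRI-O socket:

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a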
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0624a8dba0695       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     7 minutes ago       Running             coredns                   0                   d17668585f283       coredns-6f6b679f8f-9s77g
	7400c3a3872ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     7 minutes ago       Running             storage-provisioner       0                   60ebfd22a6daa       storage-provisioner
	e4e823e549cc3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     7 minutes ago       Running             coredns                   0                   92d2a30360883       coredns-6f6b679f8f-r68td
	59dabea0b2cb1       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b   7 minutes ago       Running             kindnet-cni               0                   4c49ea56223c8       kindnet-lwjmd
	25c817915a7df       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                     7 minutes ago       Running             kube-proxy                0                   8ca1f3b2cdf29       kube-proxy-9xpm4
	b5eaaf42a1219       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f    7 minutes ago       Running             kube-vip                  0                   4f89b348afb84       kube-vip-ha-227346
	511d8c1a0ec34       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                     7 minutes ago       Running             kube-apiserver            0                   2a4dcc8805294       kube-apiserver-ha-227346
	7367ba44817a2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                     7 minutes ago       Running             kube-controller-manager   0                   0cc361224291f       kube-controller-manager-ha-227346
	ded6224ece6e4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                     7 minutes ago       Running             kube-scheduler            0                   3813d79e090bd       kube-scheduler-ha-227346
	c1727fa7d7c9f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                     7 minutes ago       Running             etcd                      0                   9fe350c701f53       etcd-ha-227346
	
	
	==> coredns [0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4] <==
	[INFO] 10.244.2.2:37607 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177878s
	[INFO] 10.244.2.2:42454 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00335028s
	[INFO] 10.244.2.2:49221 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132265s
	[INFO] 10.244.2.2:58999 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151192s
	[INFO] 10.244.1.2:52835 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001787677s
	[INFO] 10.244.1.2:36917 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101601s
	[INFO] 10.244.1.2:56268 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197112s
	[INFO] 10.244.1.2:53208 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001266869s
	[INFO] 10.244.1.2:32844 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072428s
	[INFO] 10.244.1.3:44481 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088917s
	[INFO] 10.244.1.3:46305 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145954s
	[INFO] 10.244.2.2:55212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123615s
	[INFO] 10.244.2.2:34683 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089323s
	[INFO] 10.244.2.2:41746 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156593s
	[INFO] 10.244.1.2:55757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148488s
	[INFO] 10.244.1.2:40727 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010542s
	[INFO] 10.244.1.3:44262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115488s
	[INFO] 10.244.1.3:45504 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123275s
	[INFO] 10.244.2.2:42245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251796s
	[INFO] 10.244.2.2:36792 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165895s
	[INFO] 10.244.2.2:45239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156083s
	[INFO] 10.244.1.2:36640 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091031s
	[INFO] 10.244.1.2:39845 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090422s
	[INFO] 10.244.1.3:44584 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131606s
	[INFO] 10.244.1.3:41596 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084019s
	
	
	==> coredns [e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6] <==
	[INFO] 10.244.1.2:47163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230381s
	[INFO] 10.244.1.2:50433 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.003436809s
	[INFO] 10.244.1.2:59195 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000100636s
	[INFO] 10.244.1.2:32814 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001599123s
	[INFO] 10.244.2.2:39529 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195202s
	[INFO] 10.244.2.2:33472 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000242135s
	[INFO] 10.244.1.2:51221 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147815s
	[INFO] 10.244.1.2:43702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097631s
	[INFO] 10.244.1.2:40951 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142664s
	[INFO] 10.244.1.3:35658 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011583s
	[INFO] 10.244.1.3:54609 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746971s
	[INFO] 10.244.1.3:38577 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187309s
	[INFO] 10.244.1.3:55629 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001059113s
	[INFO] 10.244.1.3:53767 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013021s
	[INFO] 10.244.1.3:58767 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094503s
	[INFO] 10.244.2.2:44014 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108961s
	[INFO] 10.244.1.2:50869 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144661s
	[INFO] 10.244.1.2:41585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067801s
	[INFO] 10.244.1.3:33644 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235041s
	[INFO] 10.244.1.3:35998 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158822s
	[INFO] 10.244.2.2:49281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113706s
	[INFO] 10.244.1.2:55115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127183s
	[INFO] 10.244.1.2:50067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000143513s
	[INFO] 10.244.1.3:45276 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119251s
	[INFO] 10.244.1.3:34581 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000202685s
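Each CoreDNS line above follows the log plugin's common format: client IP and port, query ID, the quoted query (type, class, name, protocol, request size, DO bit, advertised UDP buffer size), then the response code, response flags, response size in bytes, and the lookup duration. As a sketch for re-fetching these logs (assuming the minikube profile name ha-227346 doubles as the kubectl context, which is minikube's default; the pod names are the ones listed in the container status section above):

  kubectl --context ha-227346 -n kube-system logs coredns-6f6b679f8f-9s77g
  kubectl --context ha-227346 -n kube-system logs coredns-6f6b679f8f-r68td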
	
	
	==> describe nodes <==
	Name:               ha-227346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_09_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:09:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:17:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:15:18 +0000   Mon, 19 Aug 2024 17:10:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-227346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 80471ea49a664581949d80643cd4d82b
	  System UUID:                80471ea4-9a66-4581-949d-80643cd4d82b
	  Boot ID:                    b4e046ad-f0c8-4e0a-a3c8-ccc4927ebc7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9s77g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m18s
	  kube-system                 coredns-6f6b679f8f-r68td             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m18s
	  kube-system                 etcd-ha-227346                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m22s
	  kube-system                 kindnet-lwjmd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m18s
	  kube-system                 kube-apiserver-ha-227346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-controller-manager-ha-227346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-proxy-9xpm4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-scheduler-ha-227346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-vip-ha-227346                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m17s  kube-proxy       
	  Normal  Starting                 7m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m22s  kubelet          Node ha-227346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s  kubelet          Node ha-227346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m22s  kubelet          Node ha-227346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m19s  node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  NodeReady                7m2s   kubelet          Node ha-227346 status is now: NodeReady
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           5m4s   node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
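The node summaries in this section are plain kubectl describe output; assuming the profile name ha-227346 is also the kubectl context (minikube's default), the same dump can be regenerated with:

  kubectl --context ha-227346 describe nodes

The next entry, ha-227346-m02, shows Unknown conditions and node.kubernetes.io/unreachable taints, which is what the node controller applies once a kubelet stops posting status; here the secondary control-plane node appears to have been stopped as part of the TestMultiControlPlane run.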
	
	
	Name:               ha-227346-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_10_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:10:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:13:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 17:12:41 +0000   Mon, 19 Aug 2024 17:14:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-227346-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 feb788fca1734d35a419eead2319624a
	  System UUID:                feb788fc-a173-4d35-a419-eead2319624a
	  Boot ID:                    7455d09e-c221-4dad-aeae-f6832bcbda8f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dncbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  default                     busybox-7dff88458-k75xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-227346-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-mk55z                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-227346-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-227346-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-6lhlp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-227346-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-227346-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m19s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     6m24s                  cidrAllocator    Node ha-227346-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node ha-227346-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s (x7 over 6m24s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  NodeNotReady             2m49s                  node-controller  Node ha-227346-m02 status is now: NodeNotReady
	
	
	Name:               ha-227346-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_11_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:11:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:16:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:11:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:11:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:11:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:12:52 +0000   Mon, 19 Aug 2024 17:12:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-227346-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a013b2ee813e40c8a8d8936e0473daaa
	  System UUID:                a013b2ee-813e-40c8-a8d8-936e0473daaa
	  Boot ID:                    370f4f2f-3248-4a84-a8d1-aff69aaf456c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cvdvs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-227346-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-2xfpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-apiserver-ha-227346-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-ha-227346-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-sxvbj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-ha-227346-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-vip-ha-227346-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m7s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     5m12s                  cidrAllocator    Node ha-227346-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node ha-227346-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	
	
	Name:               ha-227346-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_12_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:12:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:17:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:12:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:12:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:12:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:13:28 +0000   Mon, 19 Aug 2024 17:13:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-227346-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8069ae3ff9145c9b8ed7bff35cdea96
	  System UUID:                d8069ae3-ff91-45c9-b8ed-7bff35cdea96
	  Boot ID:                    1c56de0c-688b-4d9f-bbf7-32b68d2778a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sctvz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-proxy-7ktdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     4m6s                 cidrAllocator    Node ha-227346-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m6s (x2 over 4m7s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x2 over 4m7s)  kubelet          Node ha-227346-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x2 over 4m7s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal  NodeReady                3m46s                kubelet          Node ha-227346-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug19 17:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050820] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037447] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.694222] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.744763] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +4.535363] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.218849] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.053481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061538] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.190350] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134022] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.260627] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +3.698622] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.234958] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.058962] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.409298] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.084115] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.075846] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 17:10] kauditd_printk_skb: 36 callbacks suppressed
	[ +43.945746] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed] <==
	{"level":"warn","ts":"2024-08-19T17:17:03.278360Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.335508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.364286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.371792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.376266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.386277Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.392310Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.397666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.400763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.403804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.409427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.414608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.420156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.422921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.425460Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.430388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.435370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.435606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.441963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.445261Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.448116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.451967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.458777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.464777Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:17:03.494374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:17:03 up 7 min,  0 users,  load average: 0.37, 0.22, 0.11
	Linux ha-227346 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9] <==
	I0819 17:16:30.469580       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:16:40.467639       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:16:40.467732       1 main.go:299] handling current node
	I0819 17:16:40.467767       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:16:40.467779       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:16:40.467968       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:16:40.467976       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:16:40.468033       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:16:40.468052       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:16:50.463244       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:16:50.463274       1 main.go:299] handling current node
	I0819 17:16:50.463298       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:16:50.463303       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:16:50.463434       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:16:50.463454       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:16:50.463520       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:16:50.463537       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:17:00.467524       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:17:00.467698       1 main.go:299] handling current node
	I0819 17:17:00.467744       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:17:00.467781       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:17:00.467914       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:17:00.467939       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:17:00.468029       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:17:00.468048       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [511d8c1a0ec341947fb949dde70c27d300d923559ea17954b9d304b815939453] <==
	I0819 17:09:39.452136       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:09:39.607665       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0819 17:09:39.616011       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.205]
	I0819 17:09:39.617696       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:09:39.623790       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 17:09:39.669994       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:09:40.855733       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:09:40.880785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0819 17:09:41.007349       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:09:45.127522       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0819 17:09:45.324004       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0819 17:12:26.023041       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33338: use of closed network connection
	E0819 17:12:26.208281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33354: use of closed network connection
	E0819 17:12:26.390037       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33368: use of closed network connection
	E0819 17:12:26.564430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33382: use of closed network connection
	E0819 17:12:26.730728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59890: use of closed network connection
	E0819 17:12:26.894819       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59906: use of closed network connection
	E0819 17:12:27.058806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59930: use of closed network connection
	E0819 17:12:27.229813       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59950: use of closed network connection
	E0819 17:12:27.687890       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59982: use of closed network connection
	E0819 17:12:27.849598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60012: use of closed network connection
	E0819 17:12:28.027989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60028: use of closed network connection
	E0819 17:12:28.192330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60044: use of closed network connection
	E0819 17:12:28.365199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60064: use of closed network connection
	E0819 17:12:28.526739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60084: use of closed network connection
	
	
	==> kube-controller-manager [7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547] <==
	I0819 17:12:57.183782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:57.246143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:57.482926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:57.656966       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.639619       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-227346-m04"
	I0819 17:12:59.639793       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.683717       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.845974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:12:59.901098       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:07.478004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:17.753434       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-227346-m04"
	I0819 17:13:17.754345       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:17.767937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:19.654976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:13:28.185018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:14:14.680741       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-227346-m04"
	I0819 17:14:14.681323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:14:14.731114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:14:14.858399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.418068ms"
	I0819 17:14:14.858570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.195µs"
	I0819 17:14:14.920460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:14:14.936929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.317377ms"
	I0819 17:14:14.937151       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="173.378µs"
	I0819 17:14:19.988820       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:15:18.733334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346"
	
	
	==> kube-proxy [25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:09:46.147178       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:09:46.158672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.205"]
	E0819 17:09:46.158812       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:09:46.198739       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:09:46.198779       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:09:46.198806       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:09:46.201038       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:09:46.201309       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:09:46.201339       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:09:46.204850       1 config.go:197] "Starting service config controller"
	I0819 17:09:46.204894       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:09:46.204926       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:09:46.204930       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:09:46.206648       1 config.go:326] "Starting node config controller"
	I0819 17:09:46.206677       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:09:46.306379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:09:46.306521       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:09:46.306796       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577] <==
	W0819 17:09:39.042786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:09:39.042870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:09:39.061817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:09:39.061879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:09:41.006924       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:11:51.662271       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sxvbj\": pod kube-proxy-sxvbj is already assigned to node \"ha-227346-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sxvbj" node="ha-227346-m03"
	E0819 17:11:51.662435       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sxvbj\": pod kube-proxy-sxvbj is already assigned to node \"ha-227346-m03\"" pod="kube-system/kube-proxy-sxvbj"
	I0819 17:11:51.662497       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sxvbj" node="ha-227346-m03"
	I0819 17:12:20.628625       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="362e7b22-83fb-4748-a048-9ef1f609910d" pod="default/busybox-7dff88458-k75xm" assumedNode="ha-227346-m02" currentNode="ha-227346-m03"
	E0819 17:12:20.632886       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k75xm\": pod busybox-7dff88458-k75xm is already assigned to node \"ha-227346-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k75xm" node="ha-227346-m03"
	E0819 17:12:20.632974       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 362e7b22-83fb-4748-a048-9ef1f609910d(default/busybox-7dff88458-k75xm) was assumed on ha-227346-m03 but assigned to ha-227346-m02" pod="default/busybox-7dff88458-k75xm"
	E0819 17:12:20.633012       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k75xm\": pod busybox-7dff88458-k75xm is already assigned to node \"ha-227346-m02\"" pod="default/busybox-7dff88458-k75xm"
	I0819 17:12:20.633123       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-k75xm" node="ha-227346-m02"
	E0819 17:12:20.698274       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-c789k\": pod busybox-7dff88458-c789k is already assigned to node \"ha-227346\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-c789k" node="ha-227346"
	E0819 17:12:20.698997       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-c789k\": pod busybox-7dff88458-c789k is already assigned to node \"ha-227346\"" pod="default/busybox-7dff88458-c789k"
	E0819 17:12:57.159264       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sctvz\": pod kindnet-sctvz is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sctvz" node="ha-227346-m04"
	E0819 17:12:57.159361       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bbe42f64-8bcd-40dd-8a98-f0ca95e3ade7(kube-system/kindnet-sctvz) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sctvz"
	E0819 17:12:57.159407       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sctvz\": pod kindnet-sctvz is already assigned to node \"ha-227346-m04\"" pod="kube-system/kindnet-sctvz"
	I0819 17:12:57.159455       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sctvz" node="ha-227346-m04"
	E0819 17:12:57.162787       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7ktdr\": pod kube-proxy-7ktdr is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7ktdr" node="ha-227346-m04"
	E0819 17:12:57.162854       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7ktdr\": pod kube-proxy-7ktdr is already assigned to node \"ha-227346-m04\"" pod="kube-system/kube-proxy-7ktdr"
	E0819 17:12:57.199546       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pzs6h\": pod kube-proxy-pzs6h is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pzs6h" node="ha-227346-m04"
	E0819 17:12:57.199793       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pzs6h\": pod kube-proxy-pzs6h is already assigned to node \"ha-227346-m04\"" pod="kube-system/kube-proxy-pzs6h"
	E0819 17:12:57.200501       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9clnw\": pod kindnet-9clnw is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9clnw" node="ha-227346-m04"
	E0819 17:12:57.201139       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9clnw\": pod kindnet-9clnw is already assigned to node \"ha-227346-m04\"" pod="kube-system/kindnet-9clnw"
	
	
	==> kubelet <==
	Aug 19 17:15:41 ha-227346 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:15:41 ha-227346 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:15:41 ha-227346 kubelet[1301]: E0819 17:15:41.095275    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087741094897643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:41 ha-227346 kubelet[1301]: E0819 17:15:41.095322    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087741094897643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:51 ha-227346 kubelet[1301]: E0819 17:15:51.097288    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087751096913876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:15:51 ha-227346 kubelet[1301]: E0819 17:15:51.097659    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087751096913876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:01 ha-227346 kubelet[1301]: E0819 17:16:01.099372    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087761098999936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:01 ha-227346 kubelet[1301]: E0819 17:16:01.099394    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087761098999936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:11 ha-227346 kubelet[1301]: E0819 17:16:11.101189    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087771100820332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:11 ha-227346 kubelet[1301]: E0819 17:16:11.101464    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087771100820332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:21 ha-227346 kubelet[1301]: E0819 17:16:21.103128    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087781102542838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:21 ha-227346 kubelet[1301]: E0819 17:16:21.103448    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087781102542838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:31 ha-227346 kubelet[1301]: E0819 17:16:31.105045    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087791104695974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:31 ha-227346 kubelet[1301]: E0819 17:16:31.105452    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087791104695974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:41 ha-227346 kubelet[1301]: E0819 17:16:41.005554    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:16:41 ha-227346 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:16:41 ha-227346 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:16:41 ha-227346 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:16:41 ha-227346 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:16:41 ha-227346 kubelet[1301]: E0819 17:16:41.107303    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087801106797021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:41 ha-227346 kubelet[1301]: E0819 17:16:41.107429    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087801106797021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:51 ha-227346 kubelet[1301]: E0819 17:16:51.109243    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087811108888918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:16:51 ha-227346 kubelet[1301]: E0819 17:16:51.109267    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087811108888918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:17:01 ha-227346 kubelet[1301]: E0819 17:17:01.110871    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087821110541626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:17:01 ha-227346 kubelet[1301]: E0819 17:17:01.111367    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724087821110541626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
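The kubelet entries above show two recurring errors on this node: the iptables canary cannot create its chain because the ip6tables "nat" table is missing from the guest kernel, and the eviction manager cannot obtain image-filesystem stats from CRI-O. The following is a minimal, hypothetical diagnostic sketch (not part of the test suite) that reproduces the canary's table probe from Go by shelling out to ip6tables; it assumes the ip6tables binary is on PATH and that the process runs as root inside the VM.

// check_ip6tables_nat.go - hypothetical diagnostic, not part of minikube's tests.
// It performs the same kind of probe the kubelet canary does: ask ip6tables to
// list the "nat" table and report whether the table (ip6table_nat module) exists.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "-t nat -L -n" lists the nat table without name resolution; it fails with
	// "can't initialize ip6tables table `nat'" when the kernel support is absent.
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
		fmt.Println("hint: the guest kernel likely lacks the ip6table_nat module")
		return
	}
	fmt.Println("ip6tables nat table is present")
}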
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-227346 -n ha-227346
helpers_test.go:261: (dbg) Run:  kubectl --context ha-227346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (62.09s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (297.85s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-227346 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-227346 -v=7 --alsologtostderr
E0819 17:18:15.961479   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:18:43.663220   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-227346 -v=7 --alsologtostderr: exit status 82 (2m1.737452858s)

-- stdout --
	* Stopping node "ha-227346-m04"  ...
	* Stopping node "ha-227346-m03"  ...
	
	

-- /stdout --
** stderr ** 
	I0819 17:17:04.892037   33936 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:17:04.892288   33936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:17:04.892297   33936 out.go:358] Setting ErrFile to fd 2...
	I0819 17:17:04.892302   33936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:17:04.892492   33936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:17:04.892710   33936 out.go:352] Setting JSON to false
	I0819 17:17:04.892827   33936 mustload.go:65] Loading cluster: ha-227346
	I0819 17:17:04.893194   33936 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:17:04.893288   33936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:17:04.893463   33936 mustload.go:65] Loading cluster: ha-227346
	I0819 17:17:04.893593   33936 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:17:04.893626   33936 stop.go:39] StopHost: ha-227346-m04
	I0819 17:17:04.893995   33936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:04.894049   33936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:04.909018   33936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38665
	I0819 17:17:04.909456   33936 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:04.909905   33936 main.go:141] libmachine: Using API Version  1
	I0819 17:17:04.909927   33936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:04.910255   33936 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:04.913055   33936 out.go:177] * Stopping node "ha-227346-m04"  ...
	I0819 17:17:04.914453   33936 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 17:17:04.914483   33936 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:17:04.914733   33936 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 17:17:04.914754   33936 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:17:04.917911   33936 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:17:04.918285   33936 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:12:43 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:17:04.918310   33936 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:17:04.918466   33936 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:17:04.918804   33936 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:17:04.918955   33936 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:17:04.919099   33936 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:17:05.004546   33936 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 17:17:05.058666   33936 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 17:17:05.111518   33936 main.go:141] libmachine: Stopping "ha-227346-m04"...
	I0819 17:17:05.111544   33936 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:17:05.112985   33936 main.go:141] libmachine: (ha-227346-m04) Calling .Stop
	I0819 17:17:05.116213   33936 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 0/120
	I0819 17:17:06.174295   33936 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:17:06.175472   33936 main.go:141] libmachine: Machine "ha-227346-m04" was stopped.
	I0819 17:17:06.175486   33936 stop.go:75] duration metric: took 1.26104608s to stop
	I0819 17:17:06.175505   33936 stop.go:39] StopHost: ha-227346-m03
	I0819 17:17:06.175874   33936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:17:06.175923   33936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:17:06.190602   33936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I0819 17:17:06.190945   33936 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:17:06.191394   33936 main.go:141] libmachine: Using API Version  1
	I0819 17:17:06.191414   33936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:17:06.191712   33936 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:17:06.193842   33936 out.go:177] * Stopping node "ha-227346-m03"  ...
	I0819 17:17:06.195263   33936 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 17:17:06.195290   33936 main.go:141] libmachine: (ha-227346-m03) Calling .DriverName
	I0819 17:17:06.195492   33936 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 17:17:06.195519   33936 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHHostname
	I0819 17:17:06.198359   33936 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:17:06.198834   33936 main.go:141] libmachine: (ha-227346-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:a7:7a", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:11:18 +0000 UTC Type:0 Mac:52:54:00:9c:a7:7a Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-227346-m03 Clientid:01:52:54:00:9c:a7:7a}
	I0819 17:17:06.198866   33936 main.go:141] libmachine: (ha-227346-m03) DBG | domain ha-227346-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:9c:a7:7a in network mk-ha-227346
	I0819 17:17:06.198937   33936 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHPort
	I0819 17:17:06.199105   33936 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHKeyPath
	I0819 17:17:06.199255   33936 main.go:141] libmachine: (ha-227346-m03) Calling .GetSSHUsername
	I0819 17:17:06.199457   33936 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m03/id_rsa Username:docker}
	I0819 17:17:06.278882   33936 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 17:17:06.330887   33936 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 17:17:06.383899   33936 main.go:141] libmachine: Stopping "ha-227346-m03"...
	I0819 17:17:06.383924   33936 main.go:141] libmachine: (ha-227346-m03) Calling .GetState
	I0819 17:17:06.385453   33936 main.go:141] libmachine: (ha-227346-m03) Calling .Stop
	I0819 17:17:06.388444   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 0/120
	I0819 17:17:07.390131   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 1/120
	I0819 17:17:08.391386   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 2/120
	I0819 17:17:09.392860   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 3/120
	I0819 17:17:10.394486   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 4/120
	I0819 17:17:11.396787   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 5/120
	I0819 17:17:12.398201   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 6/120
	I0819 17:17:13.399696   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 7/120
	I0819 17:17:14.401088   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 8/120
	I0819 17:17:15.402595   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 9/120
	I0819 17:17:16.404189   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 10/120
	I0819 17:17:17.405694   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 11/120
	I0819 17:17:18.407198   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 12/120
	I0819 17:17:19.408667   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 13/120
	I0819 17:17:20.410170   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 14/120
	I0819 17:17:21.411967   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 15/120
	I0819 17:17:22.413465   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 16/120
	I0819 17:17:23.414733   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 17/120
	I0819 17:17:24.416329   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 18/120
	I0819 17:17:25.417779   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 19/120
	I0819 17:17:26.420078   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 20/120
	I0819 17:17:27.421620   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 21/120
	I0819 17:17:28.423104   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 22/120
	I0819 17:17:29.424259   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 23/120
	I0819 17:17:30.425692   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 24/120
	I0819 17:17:31.427006   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 25/120
	I0819 17:17:32.429315   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 26/120
	I0819 17:17:33.430834   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 27/120
	I0819 17:17:34.432863   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 28/120
	I0819 17:17:35.434204   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 29/120
	I0819 17:17:36.435663   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 30/120
	I0819 17:17:37.437255   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 31/120
	I0819 17:17:38.438687   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 32/120
	I0819 17:17:39.439850   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 33/120
	I0819 17:17:40.441479   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 34/120
	I0819 17:17:41.443348   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 35/120
	I0819 17:17:42.444802   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 36/120
	I0819 17:17:43.446228   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 37/120
	I0819 17:17:44.447822   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 38/120
	I0819 17:17:45.449625   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 39/120
	I0819 17:17:46.451375   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 40/120
	I0819 17:17:47.452680   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 41/120
	I0819 17:17:48.453948   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 42/120
	I0819 17:17:49.455648   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 43/120
	I0819 17:17:50.457067   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 44/120
	I0819 17:17:51.458944   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 45/120
	I0819 17:17:52.460380   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 46/120
	I0819 17:17:53.461650   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 47/120
	I0819 17:17:54.463050   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 48/120
	I0819 17:17:55.464518   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 49/120
	I0819 17:17:56.466202   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 50/120
	I0819 17:17:57.467851   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 51/120
	I0819 17:17:58.469184   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 52/120
	I0819 17:17:59.470526   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 53/120
	I0819 17:18:00.472148   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 54/120
	I0819 17:18:01.474001   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 55/120
	I0819 17:18:02.475414   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 56/120
	I0819 17:18:03.476980   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 57/120
	I0819 17:18:04.479149   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 58/120
	I0819 17:18:05.480584   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 59/120
	I0819 17:18:06.482279   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 60/120
	I0819 17:18:07.483675   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 61/120
	I0819 17:18:08.485049   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 62/120
	I0819 17:18:09.486303   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 63/120
	I0819 17:18:10.487822   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 64/120
	I0819 17:18:11.489527   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 65/120
	I0819 17:18:12.491624   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 66/120
	I0819 17:18:13.493099   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 67/120
	I0819 17:18:14.495373   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 68/120
	I0819 17:18:15.496657   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 69/120
	I0819 17:18:16.498784   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 70/120
	I0819 17:18:17.500045   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 71/120
	I0819 17:18:18.501682   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 72/120
	I0819 17:18:19.502950   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 73/120
	I0819 17:18:20.504501   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 74/120
	I0819 17:18:21.506213   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 75/120
	I0819 17:18:22.507454   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 76/120
	I0819 17:18:23.509049   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 77/120
	I0819 17:18:24.510336   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 78/120
	I0819 17:18:25.511516   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 79/120
	I0819 17:18:26.513462   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 80/120
	I0819 17:18:27.514753   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 81/120
	I0819 17:18:28.515929   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 82/120
	I0819 17:18:29.517250   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 83/120
	I0819 17:18:30.518716   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 84/120
	I0819 17:18:31.520086   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 85/120
	I0819 17:18:32.521444   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 86/120
	I0819 17:18:33.522582   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 87/120
	I0819 17:18:34.524047   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 88/120
	I0819 17:18:35.525570   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 89/120
	I0819 17:18:36.527319   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 90/120
	I0819 17:18:37.529004   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 91/120
	I0819 17:18:38.531475   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 92/120
	I0819 17:18:39.533102   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 93/120
	I0819 17:18:40.535576   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 94/120
	I0819 17:18:41.537449   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 95/120
	I0819 17:18:42.539390   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 96/120
	I0819 17:18:43.540784   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 97/120
	I0819 17:18:44.542299   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 98/120
	I0819 17:18:45.543733   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 99/120
	I0819 17:18:46.545691   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 100/120
	I0819 17:18:47.547797   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 101/120
	I0819 17:18:48.549183   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 102/120
	I0819 17:18:49.550748   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 103/120
	I0819 17:18:50.552072   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 104/120
	I0819 17:18:51.553771   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 105/120
	I0819 17:18:52.555334   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 106/120
	I0819 17:18:53.557341   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 107/120
	I0819 17:18:54.559514   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 108/120
	I0819 17:18:55.560909   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 109/120
	I0819 17:18:56.563008   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 110/120
	I0819 17:18:57.564743   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 111/120
	I0819 17:18:58.566129   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 112/120
	I0819 17:18:59.567547   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 113/120
	I0819 17:19:00.569166   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 114/120
	I0819 17:19:01.570656   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 115/120
	I0819 17:19:02.572465   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 116/120
	I0819 17:19:03.573876   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 117/120
	I0819 17:19:04.575367   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 118/120
	I0819 17:19:05.576825   33936 main.go:141] libmachine: (ha-227346-m03) Waiting for machine to stop 119/120
	I0819 17:19:06.577382   33936 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 17:19:06.577458   33936 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 17:19:06.579680   33936 out.go:201] 
	W0819 17:19:06.581031   33936 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 17:19:06.581046   33936 out.go:270] * 
	* 
	W0819 17:19:06.583199   33936 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 17:19:06.584898   33936 out.go:201] 

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-227346 -v=7 --alsologtostderr" : exit status 82
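Exit status 82 (GUEST_STOP_TIMEOUT) matches the polling loop visible in the stderr above: after requesting the stop, the driver checks the VM state roughly once per second for 120 attempts and gives up while ha-227346-m03 still reports "Running". Below is an illustrative Go sketch of that stop-and-poll pattern; the interval, attempt count, and the requestStop/getState callbacks are assumptions standing in for the real libmachine driver calls, not minikube's actual API.

// stopwait.go - illustrative sketch of the stop/poll pattern seen in the log;
// the interval, attempt count, and callbacks are assumptions, not minikube's API.
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopAndWait asks a driver to stop a VM, then polls its state up to
// maxAttempts times, sleeping interval between checks. It returns an error
// if the VM is still running afterwards (the GUEST_STOP_TIMEOUT case).
func stopAndWait(name string, requestStop func() error, getState func() (string, error),
	maxAttempts int, interval time.Duration) error {
	if err := requestStop(); err != nil {
		return fmt.Errorf("stop %s: %w", name, err)
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := getState()
		if err != nil {
			return fmt.Errorf("get state of %s: %w", name, err)
		}
		if state != "Running" {
			return nil // machine is no longer running
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, maxAttempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Fake driver that never stops, mirroring the m03 behaviour in the log above.
	err := stopAndWait("ha-227346-m03",
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		5, 10*time.Millisecond) // shortened so the example finishes quickly
	fmt.Println("result:", err)
}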
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-227346 --wait=true -v=7 --alsologtostderr
E0819 17:20:21.263050   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:21:44.328982   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-227346 --wait=true -v=7 --alsologtostderr: (2m53.620636891s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-227346
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-227346 -n ha-227346
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-227346 logs -n 25: (1.737496176s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m02:/home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m04 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp testdata/cp-test.txt                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346:/home/docker/cp-test_ha-227346-m04_ha-227346.txt                      |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346 sudo cat                                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346.txt                                |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m02:/home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03:/home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m03 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-227346 node stop m02 -v=7                                                    | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-227346 node start m02 -v=7                                                   | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:16 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-227346 -v=7                                                          | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-227346 -v=7                                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-227346 --wait=true -v=7                                                   | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:19 UTC | 19 Aug 24 17:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-227346                                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:22 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:19:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:19:06.628669   34814 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:19:06.628804   34814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:19:06.628814   34814 out.go:358] Setting ErrFile to fd 2...
	I0819 17:19:06.628820   34814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:19:06.628983   34814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:19:06.629523   34814 out.go:352] Setting JSON to false
	I0819 17:19:06.630426   34814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3692,"bootTime":1724084255,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:19:06.630480   34814 start.go:139] virtualization: kvm guest
	I0819 17:19:06.632778   34814 out.go:177] * [ha-227346] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:19:06.634106   34814 notify.go:220] Checking for updates...
	I0819 17:19:06.634156   34814 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:19:06.635413   34814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:19:06.636677   34814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:19:06.637830   34814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:19:06.639034   34814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:19:06.640253   34814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:19:06.641914   34814 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:19:06.642038   34814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:19:06.642487   34814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:19:06.642552   34814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:19:06.658203   34814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0819 17:19:06.658695   34814 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:19:06.659270   34814 main.go:141] libmachine: Using API Version  1
	I0819 17:19:06.659293   34814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:19:06.659608   34814 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:19:06.659764   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:06.695358   34814 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 17:19:06.696788   34814 start.go:297] selected driver: kvm2
	I0819 17:19:06.696815   34814 start.go:901] validating driver "kvm2" against &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:19:06.696964   34814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:19:06.697308   34814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:19:06.697385   34814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:19:06.713467   34814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:19:06.714403   34814 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:19:06.714451   34814 cni.go:84] Creating CNI manager for ""
	I0819 17:19:06.714463   34814 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 17:19:06.714533   34814 start.go:340] cluster config:
	{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:19:06.714719   34814 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:19:06.716696   34814 out.go:177] * Starting "ha-227346" primary control-plane node in "ha-227346" cluster
	I0819 17:19:06.717774   34814 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:19:06.717802   34814 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:19:06.717811   34814 cache.go:56] Caching tarball of preloaded images
	I0819 17:19:06.717893   34814 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:19:06.717903   34814 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:19:06.718012   34814 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:19:06.718204   34814 start.go:360] acquireMachinesLock for ha-227346: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:19:06.718250   34814 start.go:364] duration metric: took 28.188µs to acquireMachinesLock for "ha-227346"
	I0819 17:19:06.718269   34814 start.go:96] Skipping create...Using existing machine configuration
	I0819 17:19:06.718287   34814 fix.go:54] fixHost starting: 
	I0819 17:19:06.718551   34814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:19:06.718580   34814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:19:06.732629   34814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0819 17:19:06.733100   34814 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:19:06.733590   34814 main.go:141] libmachine: Using API Version  1
	I0819 17:19:06.733616   34814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:19:06.733941   34814 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:19:06.734149   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:06.734302   34814 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:19:06.735796   34814 fix.go:112] recreateIfNeeded on ha-227346: state=Running err=<nil>
	W0819 17:19:06.735824   34814 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 17:19:06.737882   34814 out.go:177] * Updating the running kvm2 "ha-227346" VM ...
	I0819 17:19:06.739003   34814 machine.go:93] provisionDockerMachine start ...
	I0819 17:19:06.739030   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:06.739233   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:06.741786   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.742162   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:06.742188   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.742341   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:06.742510   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.742674   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.742817   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:06.742996   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:06.743230   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:06.743243   34814 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:19:06.849383   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346
	
	I0819 17:19:06.849407   34814 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:19:06.849662   34814 buildroot.go:166] provisioning hostname "ha-227346"
	I0819 17:19:06.849685   34814 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:19:06.849856   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:06.852359   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.852792   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:06.852817   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.852969   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:06.853130   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.853288   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.853399   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:06.853572   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:06.853782   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:06.853799   34814 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346 && echo "ha-227346" | sudo tee /etc/hostname
	I0819 17:19:06.975920   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346
	
	I0819 17:19:06.975946   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:06.978445   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.978839   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:06.978866   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.979061   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:06.979269   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.979408   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.979528   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:06.979684   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:06.979892   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:06.979916   34814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:19:07.090795   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:19:07.090829   34814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:19:07.090848   34814 buildroot.go:174] setting up certificates
	I0819 17:19:07.090858   34814 provision.go:84] configureAuth start
	I0819 17:19:07.090870   34814 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:19:07.091142   34814 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:19:07.093781   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.094254   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.094285   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.094428   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:07.096812   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.097232   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.097253   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.097399   34814 provision.go:143] copyHostCerts
	I0819 17:19:07.097445   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:19:07.097527   34814 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:19:07.097547   34814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:19:07.097624   34814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:19:07.097752   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:19:07.097779   34814 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:19:07.097788   34814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:19:07.097835   34814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:19:07.097925   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:19:07.097953   34814 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:19:07.097961   34814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:19:07.097998   34814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:19:07.098083   34814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346 san=[127.0.0.1 192.168.39.205 ha-227346 localhost minikube]
	I0819 17:19:07.195527   34814 provision.go:177] copyRemoteCerts
	I0819 17:19:07.195604   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:19:07.195627   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:07.198284   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.198652   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.198682   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.198852   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:07.199095   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:07.199278   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:07.199425   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:07.284575   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:19:07.284653   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:19:07.310417   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:19:07.310504   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 17:19:07.336833   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:19:07.336901   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:19:07.361454   34814 provision.go:87] duration metric: took 270.584231ms to configureAuth
	I0819 17:19:07.361477   34814 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:19:07.361733   34814 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:19:07.361810   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:07.364415   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.364768   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.364805   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.364936   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:07.365108   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:07.365264   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:07.365378   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:07.365508   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:07.365686   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:07.365708   34814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:19:13.020156   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:19:13.020178   34814 machine.go:96] duration metric: took 6.281158215s to provisionDockerMachine
	I0819 17:19:13.020189   34814 start.go:293] postStartSetup for "ha-227346" (driver="kvm2")
	I0819 17:19:13.020198   34814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:19:13.020212   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.020567   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:19:13.020591   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.023566   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.023903   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.023929   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.024088   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.024280   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.024457   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.024577   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:13.150380   34814 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:19:13.157408   34814 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:19:13.157446   34814 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:19:13.157503   34814 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:19:13.157575   34814 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:19:13.157585   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:19:13.157660   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:19:13.219995   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:19:13.282930   34814 start.go:296] duration metric: took 262.713473ms for postStartSetup
	I0819 17:19:13.282969   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.283284   34814 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 17:19:13.283328   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.286488   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.286871   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.286905   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.287225   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.287431   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.287618   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.287843   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	W0819 17:19:13.482666   34814 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 17:19:13.482694   34814 fix.go:56] duration metric: took 6.764414155s for fixHost
	I0819 17:19:13.482716   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.485926   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.486337   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.486366   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.486573   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.486753   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.486946   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.487095   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.487278   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:13.487531   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:13.487549   34814 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:19:13.850125   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087953.810707998
	
	I0819 17:19:13.850155   34814 fix.go:216] guest clock: 1724087953.810707998
	I0819 17:19:13.850165   34814 fix.go:229] Guest: 2024-08-19 17:19:13.810707998 +0000 UTC Remote: 2024-08-19 17:19:13.482702262 +0000 UTC m=+6.888183844 (delta=328.005736ms)
	I0819 17:19:13.850214   34814 fix.go:200] guest clock delta is within tolerance: 328.005736ms
	I0819 17:19:13.850221   34814 start.go:83] releasing machines lock for "ha-227346", held for 7.131959558s
	I0819 17:19:13.850249   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.850502   34814 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:19:13.853336   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.853743   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.853773   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.853940   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.854470   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.854637   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.854751   34814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:19:13.854799   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.854829   34814 ssh_runner.go:195] Run: cat /version.json
	I0819 17:19:13.854851   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.857052   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857394   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.857420   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857440   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857574   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.857774   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.857888   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.857907   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857910   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.858105   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.858125   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:13.858234   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.858350   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.858446   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:14.078065   34814 ssh_runner.go:195] Run: systemctl --version
	I0819 17:19:14.104899   34814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:19:14.559594   34814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:19:14.566915   34814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:19:14.566989   34814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:19:14.576686   34814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 17:19:14.576705   34814 start.go:495] detecting cgroup driver to use...
	I0819 17:19:14.576808   34814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:19:14.594018   34814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:19:14.608656   34814 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:19:14.608721   34814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:19:14.623490   34814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:19:14.636786   34814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:19:14.823555   34814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:19:15.034554   34814 docker.go:233] disabling docker service ...
	I0819 17:19:15.034628   34814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:19:15.054320   34814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:19:15.071384   34814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:19:15.265315   34814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:19:15.443241   34814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:19:15.458039   34814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:19:15.490664   34814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:19:15.490744   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.505623   34814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:19:15.505726   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.518043   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.530814   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.546281   34814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:19:15.563592   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.575696   34814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.588454   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.600217   34814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:19:15.611384   34814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:19:15.621080   34814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:19:15.801102   34814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:19:25.505905   34814 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.704764817s)
	I0819 17:19:25.505953   34814 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:19:25.506016   34814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:19:25.510527   34814 start.go:563] Will wait 60s for crictl version
	I0819 17:19:25.510575   34814 ssh_runner.go:195] Run: which crictl
	I0819 17:19:25.514507   34814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:19:25.550421   34814 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:19:25.550506   34814 ssh_runner.go:195] Run: crio --version
	I0819 17:19:25.578068   34814 ssh_runner.go:195] Run: crio --version
	I0819 17:19:25.607507   34814 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:19:25.608737   34814 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:19:25.611398   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:25.611745   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:25.611775   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:25.611972   34814 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:19:25.616275   34814 kubeadm.go:883] updating cluster {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:19:25.616423   34814 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:19:25.616465   34814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:19:25.665452   34814 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:19:25.665475   34814 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:19:25.665539   34814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:19:25.698060   34814 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:19:25.698081   34814 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:19:25.698090   34814 kubeadm.go:934] updating node { 192.168.39.205 8443 v1.31.0 crio true true} ...
	I0819 17:19:25.698197   34814 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:19:25.698272   34814 ssh_runner.go:195] Run: crio config
	I0819 17:19:25.746476   34814 cni.go:84] Creating CNI manager for ""
	I0819 17:19:25.746502   34814 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 17:19:25.746514   34814 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:19:25.746542   34814 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-227346 NodeName:ha-227346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:19:25.746735   34814 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-227346"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:19:25.746766   34814 kube-vip.go:115] generating kube-vip config ...
	I0819 17:19:25.746810   34814 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:19:25.757770   34814 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:19:25.757881   34814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 17:19:25.757941   34814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:19:25.767080   34814 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:19:25.767129   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 17:19:25.775863   34814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 17:19:25.791381   34814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:19:25.806261   34814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 17:19:25.821946   34814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 17:19:25.838039   34814 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:19:25.843381   34814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:19:25.982876   34814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:19:25.996514   34814 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.205
	I0819 17:19:25.996564   34814 certs.go:194] generating shared ca certs ...
	I0819 17:19:25.996584   34814 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:19:25.996770   34814 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:19:25.996825   34814 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:19:25.996841   34814 certs.go:256] generating profile certs ...
	I0819 17:19:25.996956   34814 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:19:25.996991   34814 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf
	I0819 17:19:25.997010   34814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.189 192.168.39.95 192.168.39.254]
	I0819 17:19:26.302685   34814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf ...
	I0819 17:19:26.302721   34814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf: {Name:mkcad67e542334192c3bbfd9c0d1662abd4a6acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:19:26.302883   34814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf ...
	I0819 17:19:26.302894   34814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf: {Name:mk7238b084053b19a8639324314e3f7dc6d64dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:19:26.302968   34814 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:19:26.303115   34814 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
	I0819 17:19:26.303234   34814 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:19:26.303254   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:19:26.303266   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:19:26.303279   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:19:26.303292   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:19:26.303304   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:19:26.303316   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:19:26.303332   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:19:26.303344   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:19:26.303389   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:19:26.303416   34814 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:19:26.303424   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:19:26.303445   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:19:26.303468   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:19:26.303490   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:19:26.303526   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:19:26.303552   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.303566   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.303578   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.304092   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:19:26.327916   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:19:26.349621   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:19:26.371172   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:19:26.392952   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 17:19:26.414840   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:19:26.437655   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:19:26.459800   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:19:26.482472   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:19:26.505419   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:19:26.527376   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:19:26.549938   34814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:19:26.565346   34814 ssh_runner.go:195] Run: openssl version
	I0819 17:19:26.570651   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:19:26.580647   34814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.585302   34814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.585355   34814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.590571   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:19:26.600327   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:19:26.610351   34814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.614594   34814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.614648   34814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.620064   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:19:26.629127   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:19:26.639202   34814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.643275   34814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.643337   34814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.648685   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:19:26.657572   34814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:19:26.661674   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 17:19:26.667153   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 17:19:26.672449   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 17:19:26.677889   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 17:19:26.683406   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 17:19:26.688581   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 17:19:26.693750   34814 kubeadm.go:392] StartCluster: {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:19:26.693852   34814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:19:26.693888   34814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:19:26.728530   34814 cri.go:89] found id: "2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35"
	I0819 17:19:26.728556   34814 cri.go:89] found id: "9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f"
	I0819 17:19:26.728560   34814 cri.go:89] found id: "0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec"
	I0819 17:19:26.728564   34814 cri.go:89] found id: "b1163229fd0594539dc14e331d7cb09e7e69ac7030bf1399e654134fe2dd9792"
	I0819 17:19:26.728566   34814 cri.go:89] found id: "a909e07d87a29b9d6d81cf334d38e7b1829a3144044d74cb62a473deecdb3ef3"
	I0819 17:19:26.728569   34814 cri.go:89] found id: "2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d"
	I0819 17:19:26.728572   34814 cri.go:89] found id: "7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422"
	I0819 17:19:26.728575   34814 cri.go:89] found id: "681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9"
	I0819 17:19:26.728577   34814 cri.go:89] found id: "cc4afe373b9078bd6d32f8a9d5cda79bc9337d0fb22df3f80f1035725bcce3ac"
	I0819 17:19:26.728581   34814 cri.go:89] found id: "5d57a3c3b41e0c42cbc8e17808dbca8183361c3346f7448a85689ae54d35c28c"
	I0819 17:19:26.728585   34814 cri.go:89] found id: "64b09216d35d7b5d721e84026ab86c730b012b8603b100f6efb159f59ff28390"
	I0819 17:19:26.728588   34814 cri.go:89] found id: "0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4"
	I0819 17:19:26.728592   34814 cri.go:89] found id: "e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6"
	I0819 17:19:26.728594   34814 cri.go:89] found id: "59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9"
	I0819 17:19:26.728598   34814 cri.go:89] found id: "25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd"
	I0819 17:19:26.728601   34814 cri.go:89] found id: "7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547"
	I0819 17:19:26.728603   34814 cri.go:89] found id: "ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577"
	I0819 17:19:26.728607   34814 cri.go:89] found id: "c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed"
	I0819 17:19:26.728610   34814 cri.go:89] found id: ""
	I0819 17:19:26.728645   34814 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.944406720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe5f26e8-f7b4-4a1a-b9f7-0cbaa6040e0c name=/runtime.v1.RuntimeService/Version
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.945816986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8232ac6-351c-4cda-aae3-6a2db055973b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.946266816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088120946242716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8232ac6-351c-4cda-aae3-6a2db055973b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.946946142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a6be309-7c29-454c-a799-67ffd583818c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.947008211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a6be309-7c29-454c-a799-67ffd583818c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.947471653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724087969191161025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserv
er-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724087969139226401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50368df823e9293c1b958812b27fe383e5864cb749b66ac569626a5fa60c4ad4,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724087968843043489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502
e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35,PodSandboxId:de532343318745253aa7be80ee08b09bc84f1f4c8bbb62d49954c0f4e0d17172,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954210933943,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},
Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec,PodSandboxId:230c20f0fcd600367e75d96536069b6e404f65daf893c0dd8effc5a64d47cdc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724087953950200796,Labels:map[string
]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f,PodSandboxId:e034002a0c3bd629f7a1ec28dc607515752c38b3b64200aa674fe1df70fc63b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954009499316,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422,PodSandboxId:f166e32b9510a72fc238edc7f4a4b10477991c274d710fe9ca08ed3092f0790d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724087953673511183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d,PodSandboxId:230a5e56294f7478c3bf2f2659793f258dfb59f2fdda4f2cbb579d62e2684ce5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c
897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724087953674351607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9,PodSandboxId:b49334661e0b83e24b2cf9073bedb43670f4ada00bcceb5c7c34e78ab07d4c6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724087953646377268,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a6be309-7c29-454c-a799-67ffd583818c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.977502676Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d49d63f7-c252-4a12-97e9-503d0350f27a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.977852977Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-227346,Uid:e88411286fdced6b3ee02688711f6f43,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724087979209907858,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{kubernetes.io/config.hash: e88411286fdced6b3ee02688711f6f43,kubernetes.io/config.seen: 2024-08-19T17:19:25.799842949Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-9s77g,Uid:28ea7cc3-2a78-4b29-82c7-7a028d357471,Namespace:kube-system,Attempt:2
,},State:SANDBOX_READY,CreatedAt:1724087968699980290,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T17:10:01.047599651Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&PodSandboxMetadata{Name:kube-proxy-9xpm4,Uid:56e3f9ad-e32e-4a45-9184-72fd5076b2f7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724087968656944589,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]strin
g{kubernetes.io/config.seen: 2024-08-19T17:09:45.207554164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-r68td,Uid:e48b2c24-94f7-4ca4-8f99-420706cd0cb3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724087968607888532,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T17:10:01.039258181Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&PodSandboxMetadata{Name:etcd-ha-227346,Uid:b5b77a066592a139540b9afb0badf56c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:17240879685974463
27,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.205:2379,kubernetes.io/config.hash: b5b77a066592a139540b9afb0badf56c,kubernetes.io/config.seen: 2024-08-19T17:09:40.938612267Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&PodSandboxMetadata{Name:kindnet-lwjmd,Uid:55731455-5f1e-4499-ae63-a8ad06f5553f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724087968576620025,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,k8s-app
: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T17:09:45.196536778Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-227346,Uid:b52c02ccbd2d84de74b795b4ffd2de47,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724087968570678590,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.205:8443,kubernetes.io/config.hash: b52c02ccbd2d84de74b795b4ffd2de47,kubernetes.io/config.seen: 2024-08-19T17:09:40.938616414Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e300219ded16cd9a00
c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-227346,Uid:0bb7d8f50a5822f2e8d4254badffee6a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724087968560483759,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bb7d8f50a5822f2e8d4254badffee6a,kubernetes.io/config.seen: 2024-08-19T17:09:40.938617818Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-227346,Uid:9d16d043cd45d88d7e4b5a95563c9d12,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724087968522375280,Labels:map[string]string{component: kube-scheduler,io.k
ubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9d16d043cd45d88d7e4b5a95563c9d12,kubernetes.io/config.seen: 2024-08-19T17:09:40.938619229Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f4ed502e-5b16-4a13-9e5f-c1d271bea40b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724087968461627039,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration
: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T17:10:01.051111682Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d49d63f7-c252-4a12-97e9-503d0350f27a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.978950029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea401a2f-c437-4a32-bd79-b07e716ee3e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.979110257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea401a2f-c437-4a32-bd79-b07e716ee3e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:00 ha-227346 crio[4086]: time="2024-08-19 17:22:00.979350547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea401a2f-c437-4a32-bd79-b07e716ee3e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.003475414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e8db961-e56f-4c4d-8408-e6e91663bcd2 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.003566701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e8db961-e56f-4c4d-8408-e6e91663bcd2 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.004617480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71b3d202-0582-41d2-9062-eeffc509efb1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.005120977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088121005046214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71b3d202-0582-41d2-9062-eeffc509efb1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.005705713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98c67c23-0469-4bab-b1b7-4ea5b768e5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.005778223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98c67c23-0469-4bab-b1b7-4ea5b768e5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.006358661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724087969191161025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserv
er-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724087969139226401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50368df823e9293c1b958812b27fe383e5864cb749b66ac569626a5fa60c4ad4,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724087968843043489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502
e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35,PodSandboxId:de532343318745253aa7be80ee08b09bc84f1f4c8bbb62d49954c0f4e0d17172,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954210933943,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},
Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec,PodSandboxId:230c20f0fcd600367e75d96536069b6e404f65daf893c0dd8effc5a64d47cdc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724087953950200796,Labels:map[string
]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f,PodSandboxId:e034002a0c3bd629f7a1ec28dc607515752c38b3b64200aa674fe1df70fc63b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954009499316,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422,PodSandboxId:f166e32b9510a72fc238edc7f4a4b10477991c274d710fe9ca08ed3092f0790d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724087953673511183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d,PodSandboxId:230a5e56294f7478c3bf2f2659793f258dfb59f2fdda4f2cbb579d62e2684ce5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c
897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724087953674351607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9,PodSandboxId:b49334661e0b83e24b2cf9073bedb43670f4ada00bcceb5c7c34e78ab07d4c6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724087953646377268,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98c67c23-0469-4bab-b1b7-4ea5b768e5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.050258390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6547de5-6466-4ae5-a7b5-e0e4858553e8 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.050420417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6547de5-6466-4ae5-a7b5-e0e4858553e8 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.051486281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffe80e8f-899b-4051-a01a-a9d799bc23d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.051974809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088121051946866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffe80e8f-899b-4051-a01a-a9d799bc23d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.052595119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87819c51-0405-49a5-bcdc-186a69713678 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.052668918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87819c51-0405-49a5-bcdc-186a69713678 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:22:01 ha-227346 crio[4086]: time="2024-08-19 17:22:01.053045973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724087969191161025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserv
er-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724087969139226401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50368df823e9293c1b958812b27fe383e5864cb749b66ac569626a5fa60c4ad4,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724087968843043489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502
e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35,PodSandboxId:de532343318745253aa7be80ee08b09bc84f1f4c8bbb62d49954c0f4e0d17172,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954210933943,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},
Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec,PodSandboxId:230c20f0fcd600367e75d96536069b6e404f65daf893c0dd8effc5a64d47cdc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724087953950200796,Labels:map[string
]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f,PodSandboxId:e034002a0c3bd629f7a1ec28dc607515752c38b3b64200aa674fe1df70fc63b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954009499316,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422,PodSandboxId:f166e32b9510a72fc238edc7f4a4b10477991c274d710fe9ca08ed3092f0790d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724087953673511183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d,PodSandboxId:230a5e56294f7478c3bf2f2659793f258dfb59f2fdda4f2cbb579d62e2684ce5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c
897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724087953674351607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9,PodSandboxId:b49334661e0b83e24b2cf9073bedb43670f4ada00bcceb5c7c34e78ab07d4c6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724087953646377268,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87819c51-0405-49a5-bcdc-186a69713678 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	db204c16e84ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Running             storage-provisioner       3                   4855963e29b47       storage-provisioner
	7697b63732dd2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Running             kube-controller-manager   3                   e300219ded16c       kube-controller-manager-ha-227346
	fc00ac73decf5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Running             kube-apiserver            4                   048a398695ddd       kube-apiserver-ha-227346
	a231cce4062c4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago        Running             coredns                   2                   162f86b2ab34c       coredns-6f6b679f8f-r68td
	6a8de80bd2e8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago        Running             coredns                   2                   f8bbfa8f41732       coredns-6f6b679f8f-9s77g
	a17f7dbe7da34       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   2 minutes ago        Running             kube-vip                  0                   a4d2be5c777dc       kube-vip-ha-227346
	49db2955c5753       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   2 minutes ago        Running             kube-proxy                2                   4858d061f29ca       kube-proxy-9xpm4
	2e0b325ce6a57       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   2 minutes ago        Exited              kube-apiserver            3                   048a398695ddd       kube-apiserver-ha-227346
	b3d8e85b57f15       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   2 minutes ago        Running             kindnet-cni               2                   a5dd1c02893eb       kindnet-lwjmd
	0bdf151e1296e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   2 minutes ago        Exited              kube-controller-manager   2                   e300219ded16c       kube-controller-manager-ha-227346
	a786925478954       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   2 minutes ago        Running             etcd                      2                   e8f4568d8f119       etcd-ha-227346
	359d51dcc978c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   2 minutes ago        Running             kube-scheduler            2                   378bf7054ff3e       kube-scheduler-ha-227346
	50368df823e92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 minutes ago        Exited              storage-provisioner       2                   4855963e29b47       storage-provisioner
	2fd299c1d9e8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago        Exited              coredns                   1                   de53234331874       coredns-6f6b679f8f-9s77g
	9a18b773d6ac1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago        Exited              coredns                   1                   e034002a0c3bd       coredns-6f6b679f8f-r68td
	0210787eef3fd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   2 minutes ago        Exited              kindnet-cni               1                   230c20f0fcd60       kindnet-lwjmd
	2c9d9b1537d36       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   2 minutes ago        Exited              kube-scheduler            1                   230a5e56294f7       kube-scheduler-ha-227346
	7cce81e6b7d57       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   2 minutes ago        Exited              kube-proxy                1                   f166e32b9510a       kube-proxy-9xpm4
	681d8ae88a598       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   2 minutes ago        Exited              etcd                      1                   b49334661e0b8       etcd-ha-227346
	
	
	==> coredns [2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49322 - 15954 "HINFO IN 359130325150598962.3044329225708925938. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006159018s
	
	
	==> coredns [6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34893 - 3413 "HINFO IN 2354490025026756339.4743662506029097874. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011007743s
	
	
	==> coredns [a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-227346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_09_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:09:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:21:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:20:27 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:20:27 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:20:27 +0000   Mon, 19 Aug 2024 17:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:20:27 +0000   Mon, 19 Aug 2024 17:10:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-227346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 80471ea49a664581949d80643cd4d82b
	  System UUID:                80471ea4-9a66-4581-949d-80643cd4d82b
	  Boot ID:                    b4e046ad-f0c8-4e0a-a3c8-ccc4927ebc7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9s77g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-6f6b679f8f-r68td             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-227346                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-lwjmd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-227346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-227346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9xpm4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-227346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-227346                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 108s  kube-proxy       
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node ha-227346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node ha-227346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m   kubelet          Node ha-227346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  NodeReady                12m   kubelet          Node ha-227346 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           10m   node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           111s  node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           103s  node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           39s   node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	
	
	Name:               ha-227346-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_10_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:21:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-227346-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 feb788fca1734d35a419eead2319624a
	  System UUID:                feb788fc-a173-4d35-a419-eead2319624a
	  Boot ID:                    95b934f2-5cf6-467f-930f-f1c65d975696
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dncbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  default                     busybox-7dff88458-k75xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 etcd-ha-227346-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-mk55z                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-227346-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-227346-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6lhlp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-227346-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-227346-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-227346-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                    node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  CIDRAssignmentFailed     11m                    cidrAllocator    Node ha-227346-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-227346-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-227346-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                    node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  NodeNotReady             7m47s                  node-controller  Node ha-227346-m02 status is now: NodeNotReady
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-227346-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                   node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           103s                   node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	
	
	Name:               ha-227346-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_11_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:11:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:21:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:21:34 +0000   Mon, 19 Aug 2024 17:21:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:21:34 +0000   Mon, 19 Aug 2024 17:21:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:21:34 +0000   Mon, 19 Aug 2024 17:21:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:21:34 +0000   Mon, 19 Aug 2024 17:21:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-227346-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a013b2ee813e40c8a8d8936e0473daaa
	  System UUID:                a013b2ee-813e-40c8-a8d8-936e0473daaa
	  Boot ID:                    e29e326c-c0d5-4c43-a8d3-0abe8ad1df0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cvdvs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 etcd-ha-227346-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-2xfpd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-227346-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-227346-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-sxvbj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-227346-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-227346-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 41s                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-227346-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-227346-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal   RegisteredNode           111s               node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal   RegisteredNode           103s               node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	  Normal   NodeNotReady             71s                node-controller  Node ha-227346-m03 status is now: NodeNotReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 58s (x2 over 58s)  kubelet          Node ha-227346-m03 has been rebooted, boot id: e29e326c-c0d5-4c43-a8d3-0abe8ad1df0f
	  Normal   NodeHasSufficientMemory  58s (x3 over 58s)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x3 over 58s)  kubelet          Node ha-227346-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x3 over 58s)  kubelet          Node ha-227346-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             58s                kubelet          Node ha-227346-m03 status is now: NodeNotReady
	  Normal   NodeReady                58s                kubelet          Node ha-227346-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-227346-m03 event: Registered Node ha-227346-m03 in Controller
	
	
	Name:               ha-227346-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_12_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:12:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:21:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:21:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:21:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:21:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:21:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-227346-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8069ae3ff9145c9b8ed7bff35cdea96
	  System UUID:                d8069ae3-ff91-45c9-b8ed-7bff35cdea96
	  Boot ID:                    5f021679-6569-4e7d-8eea-422cae4a7c93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sctvz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m4s
	  kube-system                 kube-proxy-7ktdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 8m58s                kube-proxy       
	  Normal   Starting                 4s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  9m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m4s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   CIDRAssignmentFailed     9m4s                 cidrAllocator    Node ha-227346-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  9m4s (x2 over 9m5s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m4s (x2 over 9m5s)  kubelet          Node ha-227346-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m4s (x2 over 9m5s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m2s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   RegisteredNode           9m2s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   NodeReady                8m44s                kubelet          Node ha-227346-m04 status is now: NodeReady
	  Normal   RegisteredNode           111s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   RegisteredNode           103s                 node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   NodeNotReady             71s                  node-controller  Node ha-227346-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                  node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   Starting                 8s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                   kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                   kubelet          Node ha-227346-m04 has been rebooted, boot id: 5f021679-6569-4e7d-8eea-422cae4a7c93
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)      kubelet          Node ha-227346-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)      kubelet          Node ha-227346-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)      kubelet          Node ha-227346-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                   kubelet          Node ha-227346-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.218849] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.053481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061538] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.190350] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134022] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.260627] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +3.698622] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.234958] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.058962] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.409298] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.084115] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.075846] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 17:10] kauditd_printk_skb: 36 callbacks suppressed
	[ +43.945746] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 17:19] systemd-fstab-generator[3963]: Ignoring "noauto" option for root device
	[  +0.178314] systemd-fstab-generator[3988]: Ignoring "noauto" option for root device
	[  +0.254704] systemd-fstab-generator[4025]: Ignoring "noauto" option for root device
	[  +0.181116] systemd-fstab-generator[4042]: Ignoring "noauto" option for root device
	[  +0.362837] systemd-fstab-generator[4073]: Ignoring "noauto" option for root device
	[ +10.206975] systemd-fstab-generator[4370]: Ignoring "noauto" option for root device
	[  +0.087740] kauditd_printk_skb: 192 callbacks suppressed
	[ +10.067480] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.033540] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.000584] kauditd_printk_skb: 5 callbacks suppressed
	[Aug19 17:20] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9] <==
	{"level":"info","ts":"2024-08-19T17:19:15.070750Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd"}
	{"level":"info","ts":"2024-08-19T17:19:15.071129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(7149872703657272509 12889633661048190622)"}
	{"level":"info","ts":"2024-08-19T17:19:15.071211Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e"}
	{"level":"info","ts":"2024-08-19T17:19:15.080762Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:19:15.083555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(7149872703657272509 12889633661048190622) learners=(11157552390870920589)"}
	{"level":"info","ts":"2024-08-19T17:19:15.083640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e","added-peer-id":"9ad796c8c4abed8d","added-peer-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-19T17:19:15.083672Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.083689Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.087419Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T17:19:15.087562Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:19:15.087598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:19:15.087608Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:19:15.091357Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T17:19:15.091583Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b2e12d85c3b1f69e","initial-advertise-peer-urls":["https://192.168.39.205:2380"],"listen-peer-urls":["https://192.168.39.205:2380"],"advertise-client-urls":["https://192.168.39.205:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.205:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T17:19:15.091623Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T17:19:15.091693Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2024-08-19T17:19:15.091712Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2024-08-19T17:19:15.095791Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.095837Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d","remote-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-19T17:19:15.099391Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099432Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099446Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099652Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(7149872703657272509 11157552390870920589 12889633661048190622)"}
	{"level":"info","ts":"2024-08-19T17:19:15.099909Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e"}
	
	
	==> etcd [a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032] <==
	{"level":"warn","ts":"2024-08-19T17:20:57.953174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:20:58.026601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b2e12d85c3b1f69e","from":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-19T17:20:59.926158Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:20:59.926207Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:00.079109Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9ad796c8c4abed8d","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:00.079143Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9ad796c8c4abed8d","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:03.927532Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:03.927636Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:05.079826Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9ad796c8c4abed8d","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:05.080019Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9ad796c8c4abed8d","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:07.928793Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:07.928909Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:10.080420Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9ad796c8c4abed8d","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:10.080470Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9ad796c8c4abed8d","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:11.930213Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:11.930310Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9ad796c8c4abed8d","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T17:21:12.032440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.064123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-227346-m03\" ","response":"range_response_count:1 size:5940"}
	{"level":"info","ts":"2024-08-19T17:21:12.032625Z","caller":"traceutil/trace.go:171","msg":"trace[896264271] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-227346-m03; range_end:; response_count:1; response_revision:2411; }","duration":"104.288478ms","start":"2024-08-19T17:21:11.928316Z","end":"2024-08-19T17:21:12.032604Z","steps":["trace[896264271] 'range keys from in-memory index tree'  (duration: 102.996316ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:21:13.612349Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:21:13.623545Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:21:13.623635Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:21:13.645371Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b2e12d85c3b1f69e","to":"9ad796c8c4abed8d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:21:13.645505Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:21:13.653359Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b2e12d85c3b1f69e","to":"9ad796c8c4abed8d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:21:13.653436Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	
	
	==> kernel <==
	 17:22:01 up 12 min,  0 users,  load average: 0.57, 0.72, 0.38
	Linux ha-227346 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec] <==
	I0819 17:19:14.539758       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0819 17:19:14.540043       1 main.go:139] hostIP = 192.168.39.205
	podIP = 192.168.39.205
	I0819 17:19:14.548236       1 main.go:148] setting mtu 1500 for CNI 
	I0819 17:19:14.548266       1 main.go:178] kindnetd IP family: "ipv4"
	I0819 17:19:14.548282       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0819 17:19:15.164271       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	
	
	==> kindnet [b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f] <==
	I0819 17:21:30.109239       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:21:40.113361       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:21:40.113428       1 main.go:299] handling current node
	I0819 17:21:40.113480       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:21:40.113489       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:21:40.113679       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:21:40.113710       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:21:40.113784       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:21:40.113800       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:21:50.117286       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:21:50.117546       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:21:50.117787       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:21:50.117837       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	I0819 17:21:50.117940       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:21:50.117966       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:21:50.118318       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:21:50.118444       1 main.go:299] handling current node
	I0819 17:22:00.111965       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:22:00.112044       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:22:00.112273       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:22:00.112307       1 main.go:299] handling current node
	I0819 17:22:00.112331       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:22:00.112340       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:22:00.112423       1 main.go:295] Handling node with IPs: map[192.168.39.95:{}]
	I0819 17:22:00.112448       1 main.go:322] Node ha-227346-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c] <==
	I0819 17:19:29.645536       1 options.go:228] external host was not specified, using 192.168.39.205
	I0819 17:19:29.647470       1 server.go:142] Version: v1.31.0
	I0819 17:19:29.647548       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:19:30.271705       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 17:19:30.287225       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:19:30.290029       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 17:19:30.290185       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 17:19:30.290482       1 instance.go:232] Using reconciler: lease
	W0819 17:19:50.270484       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 17:19:50.270484       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0819 17:19:50.291206       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f] <==
	I0819 17:20:14.596153       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:20:14.630125       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:20:14.630161       1 policy_source.go:224] refreshing policies
	I0819 17:20:14.643997       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:20:14.683147       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:20:14.683344       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:20:14.683425       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 17:20:14.683795       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:20:14.684309       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:20:14.684807       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:20:14.685042       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:20:14.685157       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:20:14.685484       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:20:14.685685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:20:14.687419       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 17:20:14.689146       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:20:14.692900       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0819 17:20:14.710743       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.95]
	I0819 17:20:14.713417       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:20:14.720281       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:20:14.724894       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 17:20:14.731579       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 17:20:15.592852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 17:20:15.946773       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.205 192.168.39.95]
	W0819 17:20:26.080386       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.205]
	
	
	==> kube-controller-manager [0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b] <==
	I0819 17:19:30.095204       1 serving.go:386] Generated self-signed cert in-memory
	I0819 17:19:30.607687       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 17:19:30.607780       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:19:30.609610       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:19:30.609816       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 17:19:30.610361       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 17:19:30.610454       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 17:19:51.297230       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.205:8443/healthz\": dial tcp 192.168.39.205:8443: connect: connection refused"
	
	
	==> kube-controller-manager [7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6] <==
	I0819 17:20:41.384696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="115.354µs"
	I0819 17:20:50.047914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:20:50.048169       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:20:50.079992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:20:50.083753       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:20:50.100545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.547098ms"
	I0819 17:20:50.102567       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.19µs"
	I0819 17:20:53.717188       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:20:55.305343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:20:57.738238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m02"
	I0819 17:21:03.655849       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:21:03.681644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:21:03.710032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:21:03.800232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:21:04.581205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.877µs"
	I0819 17:21:05.389498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:21:22.780969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:21:22.863974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.965792ms"
	I0819 17:21:22.864255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.73µs"
	I0819 17:21:22.886894       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:21:34.108175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m03"
	I0819 17:21:53.546019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:21:53.546264       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-227346-m04"
	I0819 17:21:53.565673       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	I0819 17:21:53.737581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346-m04"
	
	
	==> kube-proxy [49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264] <==
	E0819 17:20:12.993558       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-227346\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 17:20:12.993614       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0819 17:20:12.993717       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:20:13.071379       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:20:13.071486       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:20:13.071547       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:20:13.076311       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:20:13.076684       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:20:13.076710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:20:13.079675       1 config.go:197] "Starting service config controller"
	I0819 17:20:13.079755       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:20:13.079798       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:20:13.079817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:20:13.080665       1 config.go:326] "Starting node config controller"
	I0819 17:20:13.080692       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0819 17:20:16.065596       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0819 17:20:16.073304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-227346&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 17:20:16.076340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-227346&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:20:16.073454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 17:20:16.076383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:20:16.073527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 17:20:16.076406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0819 17:20:17.081006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:20:17.280894       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:20:17.380088       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422] <==
	
	
	==> kube-scheduler [2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d] <==
	
	
	==> kube-scheduler [359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b] <==
	W0819 17:20:06.739415       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.205:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:06.739481       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.205:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:06.907036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.205:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:06.907230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.205:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:07.467707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.205:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:07.467778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.205:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:07.557994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.205:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:07.558217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.205:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:08.168198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.205:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:08.168255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.205:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:08.796608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.205:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:08.796691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.205:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:08.939393       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.205:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:08.939456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.205:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:10.496651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.205:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:10.496787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.205:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:10.526497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.205:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:10.526604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.205:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:10.698428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.205:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:10.698565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.205:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:11.364587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.205:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:11.364717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.205:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:14.633903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:20:14.634026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:20:30.107282       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:20:41 ha-227346 kubelet[1301]: E0819 17:20:41.158008    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088041157669248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:20:41 ha-227346 kubelet[1301]: E0819 17:20:41.158038    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088041157669248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:20:48 ha-227346 kubelet[1301]: I0819 17:20:48.976410    1301 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-227346" podUID="0f27551d-8d73-4f32-8f52-048bb3dfa992"
	Aug 19 17:20:48 ha-227346 kubelet[1301]: I0819 17:20:48.997751    1301 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-227346"
	Aug 19 17:20:51 ha-227346 kubelet[1301]: E0819 17:20:51.159892    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088051159518165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:20:51 ha-227346 kubelet[1301]: E0819 17:20:51.159916    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088051159518165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:01 ha-227346 kubelet[1301]: E0819 17:21:01.163821    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088061163187306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:01 ha-227346 kubelet[1301]: E0819 17:21:01.163870    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088061163187306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:11 ha-227346 kubelet[1301]: E0819 17:21:11.169843    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088071167345141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:11 ha-227346 kubelet[1301]: E0819 17:21:11.169888    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088071167345141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:21 ha-227346 kubelet[1301]: E0819 17:21:21.171634    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088081171235324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:21 ha-227346 kubelet[1301]: E0819 17:21:21.172158    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088081171235324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:31 ha-227346 kubelet[1301]: E0819 17:21:31.174509    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088091173870338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:31 ha-227346 kubelet[1301]: E0819 17:21:31.174878    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088091173870338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:41 ha-227346 kubelet[1301]: E0819 17:21:41.006283    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:21:41 ha-227346 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:21:41 ha-227346 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:21:41 ha-227346 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:21:41 ha-227346 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:21:41 ha-227346 kubelet[1301]: E0819 17:21:41.176685    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088101176348492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:41 ha-227346 kubelet[1301]: E0819 17:21:41.176724    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088101176348492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:51 ha-227346 kubelet[1301]: E0819 17:21:51.178795    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088111178507906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:21:51 ha-227346 kubelet[1301]: E0819 17:21:51.178856    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088111178507906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:22:01 ha-227346 kubelet[1301]: E0819 17:22:01.180861    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088121180566106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:22:01 ha-227346 kubelet[1301]: E0819 17:22:01.180901    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088121180566106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 17:22:00.619919   35928 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19478-10654/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-227346 -n ha-227346
helpers_test.go:261: (dbg) Run:  kubectl --context ha-227346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (297.85s)
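Note: the "bufio.Scanner: token too long" error in the stderr block above (logs.go:258, while reading lastStart.txt) is Go's bufio.Scanner hitting its default 64 KiB per-line limit. Below is a minimal sketch of reading a file with very long lines by enlarging the scanner buffer; it is illustrative only, not minikube's logs.go, and the "lastStart.txt" path is a placeholder.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// "lastStart.txt" is a placeholder path for this sketch.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from the default 64 KiB (bufio.MaxScanTokenSize)
	// to 10 MiB so long lines no longer trigger "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}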

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 stop -v=7 --alsologtostderr
E0819 17:23:15.960971   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 stop -v=7 --alsologtostderr: exit status 82 (2m0.482823151s)

                                                
                                                
-- stdout --
	* Stopping node "ha-227346-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:22:20.385220   36339 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:22:20.385453   36339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:22:20.385461   36339 out.go:358] Setting ErrFile to fd 2...
	I0819 17:22:20.385466   36339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:22:20.385663   36339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:22:20.385879   36339 out.go:352] Setting JSON to false
	I0819 17:22:20.385956   36339 mustload.go:65] Loading cluster: ha-227346
	I0819 17:22:20.386295   36339 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:22:20.386381   36339 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:22:20.386541   36339 mustload.go:65] Loading cluster: ha-227346
	I0819 17:22:20.386676   36339 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:22:20.386698   36339 stop.go:39] StopHost: ha-227346-m04
	I0819 17:22:20.387046   36339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:22:20.387092   36339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:22:20.402896   36339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0819 17:22:20.403349   36339 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:22:20.403907   36339 main.go:141] libmachine: Using API Version  1
	I0819 17:22:20.403926   36339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:22:20.404251   36339 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:22:20.406266   36339 out.go:177] * Stopping node "ha-227346-m04"  ...
	I0819 17:22:20.407597   36339 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 17:22:20.407639   36339 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:22:20.407875   36339 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 17:22:20.407900   36339 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:22:20.410723   36339 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:22:20.411189   36339 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:21:48 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:22:20.411211   36339 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:22:20.411352   36339 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:22:20.411537   36339 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:22:20.411704   36339 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:22:20.411844   36339 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	I0819 17:22:20.500177   36339 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 17:22:20.552250   36339 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 17:22:20.604845   36339 main.go:141] libmachine: Stopping "ha-227346-m04"...
	I0819 17:22:20.604876   36339 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:22:20.606488   36339 main.go:141] libmachine: (ha-227346-m04) Calling .Stop
	I0819 17:22:20.609902   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 0/120
	I0819 17:22:21.611389   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 1/120
	I0819 17:22:22.612642   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 2/120
	I0819 17:22:23.614900   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 3/120
	I0819 17:22:24.616341   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 4/120
	I0819 17:22:25.618452   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 5/120
	I0819 17:22:26.620288   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 6/120
	I0819 17:22:27.621744   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 7/120
	I0819 17:22:28.623446   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 8/120
	I0819 17:22:29.624652   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 9/120
	I0819 17:22:30.626702   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 10/120
	I0819 17:22:31.629340   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 11/120
	I0819 17:22:32.630724   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 12/120
	I0819 17:22:33.631979   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 13/120
	I0819 17:22:34.633280   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 14/120
	I0819 17:22:35.635263   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 15/120
	I0819 17:22:36.637017   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 16/120
	I0819 17:22:37.639591   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 17/120
	I0819 17:22:38.641280   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 18/120
	I0819 17:22:39.642502   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 19/120
	I0819 17:22:40.644168   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 20/120
	I0819 17:22:41.646449   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 21/120
	I0819 17:22:42.647916   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 22/120
	I0819 17:22:43.649293   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 23/120
	I0819 17:22:44.651366   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 24/120
	I0819 17:22:45.653373   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 25/120
	I0819 17:22:46.655315   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 26/120
	I0819 17:22:47.656994   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 27/120
	I0819 17:22:48.659245   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 28/120
	I0819 17:22:49.661420   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 29/120
	I0819 17:22:50.663601   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 30/120
	I0819 17:22:51.665040   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 31/120
	I0819 17:22:52.667150   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 32/120
	I0819 17:22:53.668625   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 33/120
	I0819 17:22:54.669802   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 34/120
	I0819 17:22:55.672055   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 35/120
	I0819 17:22:56.673318   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 36/120
	I0819 17:22:57.675359   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 37/120
	I0819 17:22:58.676551   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 38/120
	I0819 17:22:59.678253   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 39/120
	I0819 17:23:00.680596   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 40/120
	I0819 17:23:01.682105   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 41/120
	I0819 17:23:02.683419   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 42/120
	I0819 17:23:03.685541   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 43/120
	I0819 17:23:04.687491   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 44/120
	I0819 17:23:05.688815   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 45/120
	I0819 17:23:06.690471   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 46/120
	I0819 17:23:07.691764   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 47/120
	I0819 17:23:08.693394   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 48/120
	I0819 17:23:09.694915   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 49/120
	I0819 17:23:10.697225   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 50/120
	I0819 17:23:11.698772   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 51/120
	I0819 17:23:12.700471   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 52/120
	I0819 17:23:13.702317   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 53/120
	I0819 17:23:14.704370   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 54/120
	I0819 17:23:15.706455   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 55/120
	I0819 17:23:16.707819   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 56/120
	I0819 17:23:17.709470   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 57/120
	I0819 17:23:18.711373   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 58/120
	I0819 17:23:19.713064   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 59/120
	I0819 17:23:20.715242   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 60/120
	I0819 17:23:21.716732   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 61/120
	I0819 17:23:22.718208   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 62/120
	I0819 17:23:23.720464   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 63/120
	I0819 17:23:24.722021   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 64/120
	I0819 17:23:25.723723   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 65/120
	I0819 17:23:26.725172   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 66/120
	I0819 17:23:27.727324   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 67/120
	I0819 17:23:28.729124   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 68/120
	I0819 17:23:29.730438   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 69/120
	I0819 17:23:30.732506   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 70/120
	I0819 17:23:31.733847   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 71/120
	I0819 17:23:32.735176   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 72/120
	I0819 17:23:33.736857   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 73/120
	I0819 17:23:34.738186   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 74/120
	I0819 17:23:35.740292   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 75/120
	I0819 17:23:36.741627   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 76/120
	I0819 17:23:37.743426   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 77/120
	I0819 17:23:38.744919   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 78/120
	I0819 17:23:39.746448   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 79/120
	I0819 17:23:40.748555   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 80/120
	I0819 17:23:41.750177   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 81/120
	I0819 17:23:42.751484   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 82/120
	I0819 17:23:43.753000   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 83/120
	I0819 17:23:44.754417   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 84/120
	I0819 17:23:45.756549   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 85/120
	I0819 17:23:46.758793   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 86/120
	I0819 17:23:47.760201   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 87/120
	I0819 17:23:48.761777   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 88/120
	I0819 17:23:49.763211   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 89/120
	I0819 17:23:50.765645   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 90/120
	I0819 17:23:51.767428   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 91/120
	I0819 17:23:52.768503   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 92/120
	I0819 17:23:53.770112   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 93/120
	I0819 17:23:54.772289   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 94/120
	I0819 17:23:55.774599   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 95/120
	I0819 17:23:56.775997   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 96/120
	I0819 17:23:57.777668   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 97/120
	I0819 17:23:58.779186   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 98/120
	I0819 17:23:59.780763   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 99/120
	I0819 17:24:00.782354   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 100/120
	I0819 17:24:01.783750   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 101/120
	I0819 17:24:02.785174   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 102/120
	I0819 17:24:03.787272   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 103/120
	I0819 17:24:04.788830   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 104/120
	I0819 17:24:05.790844   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 105/120
	I0819 17:24:06.792230   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 106/120
	I0819 17:24:07.793971   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 107/120
	I0819 17:24:08.795805   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 108/120
	I0819 17:24:09.797296   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 109/120
	I0819 17:24:10.798814   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 110/120
	I0819 17:24:11.800426   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 111/120
	I0819 17:24:12.801777   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 112/120
	I0819 17:24:13.804304   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 113/120
	I0819 17:24:14.805755   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 114/120
	I0819 17:24:15.807749   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 115/120
	I0819 17:24:16.809052   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 116/120
	I0819 17:24:17.811309   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 117/120
	I0819 17:24:18.813550   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 118/120
	I0819 17:24:19.815035   36339 main.go:141] libmachine: (ha-227346-m04) Waiting for machine to stop 119/120
	I0819 17:24:20.816253   36339 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 17:24:20.816308   36339 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 17:24:20.818419   36339 out.go:201] 
	W0819 17:24:20.819649   36339 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 17:24:20.819680   36339 out.go:270] * 
	* 
	W0819 17:24:20.822177   36339 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 17:24:20.823512   36339 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-227346 stop -v=7 --alsologtostderr": exit status 82
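Note: in the stderr above, the kvm2 driver polls the VM state once per second ("Waiting for machine to stop 0/120" through "119/120") and gives up with the machine still "Running"; minikube reports this as GUEST_STOP_TIMEOUT, and the command exits 82 in this run. The following Go sketch shows that bounded-poll pattern only; it is not minikube's stop.go or the driver code, and vmState/stopVM are hypothetical stand-ins.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a hypothetical stand-in for the driver's state query; in this
// run the real VM kept reporting "Running" for the whole two-minute window.
func vmState() string { return "Running" }

// stopVM waits up to `attempts` seconds for the VM to reach "Stopped",
// mirroring the "Waiting for machine to stop i/120" lines in the log.
func stopVM(attempts int) error {
	for i := 0; i < attempts; i++ {
		if vmState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The failing run used 120 attempts; 3 keeps this sketch quick to run.
	if err := stopVM(3); err != nil {
		fmt.Println("stop err:", err)
	}
}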
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr: exit status 3 (19.028730165s)

                                                
                                                
-- stdout --
	ha-227346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-227346-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:24:20.871912   36786 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:24:20.872022   36786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:24:20.872032   36786 out.go:358] Setting ErrFile to fd 2...
	I0819 17:24:20.872037   36786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:24:20.872205   36786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:24:20.872384   36786 out.go:352] Setting JSON to false
	I0819 17:24:20.872407   36786 mustload.go:65] Loading cluster: ha-227346
	I0819 17:24:20.872517   36786 notify.go:220] Checking for updates...
	I0819 17:24:20.872779   36786 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:24:20.872796   36786 status.go:255] checking status of ha-227346 ...
	I0819 17:24:20.873220   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:20.873288   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:20.898053   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I0819 17:24:20.898443   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:20.899019   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:20.899041   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:20.899384   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:20.899585   36786 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:24:20.901379   36786 status.go:330] ha-227346 host status = "Running" (err=<nil>)
	I0819 17:24:20.901401   36786 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:24:20.901774   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:20.901808   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:20.916721   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0819 17:24:20.917103   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:20.917590   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:20.917606   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:20.917911   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:20.918106   36786 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:24:20.920435   36786 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:24:20.920909   36786 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:24:20.920941   36786 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:24:20.921097   36786 host.go:66] Checking if "ha-227346" exists ...
	I0819 17:24:20.921387   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:20.921446   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:20.935865   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0819 17:24:20.936206   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:20.936602   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:20.936622   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:20.936979   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:20.937173   36786 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:24:20.937349   36786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:24:20.937380   36786 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:24:20.939895   36786 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:24:20.940254   36786 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:24:20.940281   36786 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:24:20.940512   36786 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:24:20.940689   36786 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:24:20.940825   36786 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:24:20.941114   36786 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:24:21.023791   36786 ssh_runner.go:195] Run: systemctl --version
	I0819 17:24:21.030131   36786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:24:21.044955   36786 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:24:21.044983   36786 api_server.go:166] Checking apiserver status ...
	I0819 17:24:21.045021   36786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:24:21.060182   36786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5486/cgroup
	W0819 17:24:21.069596   36786 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5486/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:24:21.069644   36786 ssh_runner.go:195] Run: ls
	I0819 17:24:21.074116   36786 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:24:21.078577   36786 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:24:21.078605   36786 status.go:422] ha-227346 apiserver status = Running (err=<nil>)
	I0819 17:24:21.078620   36786 status.go:257] ha-227346 status: &{Name:ha-227346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:24:21.078641   36786 status.go:255] checking status of ha-227346-m02 ...
	I0819 17:24:21.079061   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:21.079106   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:21.093685   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0819 17:24:21.094090   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:21.094534   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:21.094556   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:21.094861   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:21.095086   36786 main.go:141] libmachine: (ha-227346-m02) Calling .GetState
	I0819 17:24:21.096849   36786 status.go:330] ha-227346-m02 host status = "Running" (err=<nil>)
	I0819 17:24:21.096864   36786 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:24:21.097131   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:21.097166   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:21.111896   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I0819 17:24:21.112273   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:21.112763   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:21.112789   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:21.113112   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:21.113323   36786 main.go:141] libmachine: (ha-227346-m02) Calling .GetIP
	I0819 17:24:21.116343   36786 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:24:21.116795   36786 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:19:37 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:24:21.116824   36786 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:24:21.116992   36786 host.go:66] Checking if "ha-227346-m02" exists ...
	I0819 17:24:21.117335   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:21.117368   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:21.131785   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I0819 17:24:21.132125   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:21.132528   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:21.132571   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:21.132923   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:21.133128   36786 main.go:141] libmachine: (ha-227346-m02) Calling .DriverName
	I0819 17:24:21.133318   36786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:24:21.133344   36786 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHHostname
	I0819 17:24:21.136133   36786 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:24:21.136572   36786 main.go:141] libmachine: (ha-227346-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:ca:df", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:19:37 +0000 UTC Type:0 Mac:52:54:00:50:ca:df Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-227346-m02 Clientid:01:52:54:00:50:ca:df}
	I0819 17:24:21.136607   36786 main.go:141] libmachine: (ha-227346-m02) DBG | domain ha-227346-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:50:ca:df in network mk-ha-227346
	I0819 17:24:21.136716   36786 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHPort
	I0819 17:24:21.136889   36786 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHKeyPath
	I0819 17:24:21.137025   36786 main.go:141] libmachine: (ha-227346-m02) Calling .GetSSHUsername
	I0819 17:24:21.137149   36786 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m02/id_rsa Username:docker}
	I0819 17:24:21.225284   36786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:24:21.242078   36786 kubeconfig.go:125] found "ha-227346" server: "https://192.168.39.254:8443"
	I0819 17:24:21.242106   36786 api_server.go:166] Checking apiserver status ...
	I0819 17:24:21.242145   36786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:24:21.257008   36786 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W0819 17:24:21.266375   36786 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:24:21.266429   36786 ssh_runner.go:195] Run: ls
	I0819 17:24:21.271018   36786 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0819 17:24:21.276132   36786 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0819 17:24:21.276155   36786 status.go:422] ha-227346-m02 apiserver status = Running (err=<nil>)
	I0819 17:24:21.276163   36786 status.go:257] ha-227346-m02 status: &{Name:ha-227346-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:24:21.276177   36786 status.go:255] checking status of ha-227346-m04 ...
	I0819 17:24:21.276522   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:21.276576   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:21.291798   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0819 17:24:21.292230   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:21.292768   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:21.292787   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:21.293124   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:21.293367   36786 main.go:141] libmachine: (ha-227346-m04) Calling .GetState
	I0819 17:24:21.295209   36786 status.go:330] ha-227346-m04 host status = "Running" (err=<nil>)
	I0819 17:24:21.295228   36786 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:24:21.295565   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:21.295603   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:21.310238   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I0819 17:24:21.310741   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:21.311257   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:21.311277   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:21.311631   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:21.311829   36786 main.go:141] libmachine: (ha-227346-m04) Calling .GetIP
	I0819 17:24:21.314774   36786 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:24:21.315279   36786 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:21:48 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:24:21.315300   36786 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:24:21.315433   36786 host.go:66] Checking if "ha-227346-m04" exists ...
	I0819 17:24:21.315750   36786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:24:21.315806   36786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:24:21.331298   36786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0819 17:24:21.331711   36786 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:24:21.332132   36786 main.go:141] libmachine: Using API Version  1
	I0819 17:24:21.332153   36786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:24:21.332455   36786 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:24:21.332607   36786 main.go:141] libmachine: (ha-227346-m04) Calling .DriverName
	I0819 17:24:21.332791   36786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:24:21.332808   36786 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHHostname
	I0819 17:24:21.335905   36786 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:24:21.336303   36786 main.go:141] libmachine: (ha-227346-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:07:e1", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:21:48 +0000 UTC Type:0 Mac:52:54:00:dd:07:e1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-227346-m04 Clientid:01:52:54:00:dd:07:e1}
	I0819 17:24:21.336325   36786 main.go:141] libmachine: (ha-227346-m04) DBG | domain ha-227346-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:dd:07:e1 in network mk-ha-227346
	I0819 17:24:21.336463   36786 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHPort
	I0819 17:24:21.336665   36786 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHKeyPath
	I0819 17:24:21.336852   36786 main.go:141] libmachine: (ha-227346-m04) Calling .GetSSHUsername
	I0819 17:24:21.337031   36786 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346-m04/id_rsa Username:docker}
	W0819 17:24:39.853028   36786 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.96:22: connect: no route to host
	W0819 17:24:39.853130   36786 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	E0819 17:24:39.853153   36786 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host
	I0819 17:24:39.853165   36786 status.go:257] ha-227346-m04 status: &{Name:ha-227346-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0819 17:24:39.853188   36786 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.96:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr" : exit status 3
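Note: the status path in the stderr above checks each node by SSHing in, running "sudo systemctl is-active --quiet service kubelet", and probing the API server at https://192.168.39.254:8443/healthz. ha-227346 and ha-227346-m02 answer 200 "ok", while ha-227346-m04 is reported Host:Error / Kubelet:Nonexistent because the SSH dial to 192.168.39.96:22 fails with "no route to host", the only error in this status run before it exits 3. A minimal Go sketch of such a /healthz probe follows; it is illustrative only, and skipping certificate verification is an assumption made for brevity, not necessarily what minikube does.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the cluster VIP's /healthz endpoint, as the status check does above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip cert verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}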
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-227346 -n ha-227346
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-227346 logs -n 25: (1.6277207s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m04 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp testdata/cp-test.txt                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346:/home/docker/cp-test_ha-227346-m04_ha-227346.txt                      |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346 sudo cat                                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346.txt                                |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m02:/home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m02 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m03:/home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n                                                                | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | ha-227346-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-227346 ssh -n ha-227346-m03 sudo cat                                         | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC | 19 Aug 24 17:13 UTC |
	|         | /home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-227346 node stop m02 -v=7                                                    | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:13 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-227346 node start m02 -v=7                                                   | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:16 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-227346 -v=7                                                          | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-227346 -v=7                                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:17 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-227346 --wait=true -v=7                                                   | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:19 UTC | 19 Aug 24 17:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-227346                                                               | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:22 UTC |                     |
	| node    | ha-227346 node delete m03 -v=7                                                  | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:22 UTC | 19 Aug 24 17:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-227346 stop -v=7                                                             | ha-227346 | jenkins | v1.33.1 | 19 Aug 24 17:22 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:19:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:19:06.628669   34814 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:19:06.628804   34814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:19:06.628814   34814 out.go:358] Setting ErrFile to fd 2...
	I0819 17:19:06.628820   34814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:19:06.628983   34814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:19:06.629523   34814 out.go:352] Setting JSON to false
	I0819 17:19:06.630426   34814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3692,"bootTime":1724084255,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:19:06.630480   34814 start.go:139] virtualization: kvm guest
	I0819 17:19:06.632778   34814 out.go:177] * [ha-227346] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:19:06.634106   34814 notify.go:220] Checking for updates...
	I0819 17:19:06.634156   34814 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:19:06.635413   34814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:19:06.636677   34814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:19:06.637830   34814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:19:06.639034   34814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:19:06.640253   34814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:19:06.641914   34814 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:19:06.642038   34814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:19:06.642487   34814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:19:06.642552   34814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:19:06.658203   34814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0819 17:19:06.658695   34814 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:19:06.659270   34814 main.go:141] libmachine: Using API Version  1
	I0819 17:19:06.659293   34814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:19:06.659608   34814 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:19:06.659764   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:06.695358   34814 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 17:19:06.696788   34814 start.go:297] selected driver: kvm2
	I0819 17:19:06.696815   34814 start.go:901] validating driver "kvm2" against &{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:19:06.696964   34814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:19:06.697308   34814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:19:06.697385   34814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:19:06.713467   34814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:19:06.714403   34814 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:19:06.714451   34814 cni.go:84] Creating CNI manager for ""
	I0819 17:19:06.714463   34814 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 17:19:06.714533   34814 start.go:340] cluster config:
	{Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:19:06.714719   34814 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:19:06.716696   34814 out.go:177] * Starting "ha-227346" primary control-plane node in "ha-227346" cluster
	I0819 17:19:06.717774   34814 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:19:06.717802   34814 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:19:06.717811   34814 cache.go:56] Caching tarball of preloaded images
	I0819 17:19:06.717893   34814 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:19:06.717903   34814 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:19:06.718012   34814 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/config.json ...
	I0819 17:19:06.718204   34814 start.go:360] acquireMachinesLock for ha-227346: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:19:06.718250   34814 start.go:364] duration metric: took 28.188µs to acquireMachinesLock for "ha-227346"
	I0819 17:19:06.718269   34814 start.go:96] Skipping create...Using existing machine configuration
	I0819 17:19:06.718287   34814 fix.go:54] fixHost starting: 
	I0819 17:19:06.718551   34814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:19:06.718580   34814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:19:06.732629   34814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0819 17:19:06.733100   34814 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:19:06.733590   34814 main.go:141] libmachine: Using API Version  1
	I0819 17:19:06.733616   34814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:19:06.733941   34814 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:19:06.734149   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:06.734302   34814 main.go:141] libmachine: (ha-227346) Calling .GetState
	I0819 17:19:06.735796   34814 fix.go:112] recreateIfNeeded on ha-227346: state=Running err=<nil>
	W0819 17:19:06.735824   34814 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 17:19:06.737882   34814 out.go:177] * Updating the running kvm2 "ha-227346" VM ...
	I0819 17:19:06.739003   34814 machine.go:93] provisionDockerMachine start ...
	I0819 17:19:06.739030   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:06.739233   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:06.741786   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.742162   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:06.742188   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.742341   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:06.742510   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.742674   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.742817   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:06.742996   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:06.743230   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:06.743243   34814 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:19:06.849383   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346
	
	I0819 17:19:06.849407   34814 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:19:06.849662   34814 buildroot.go:166] provisioning hostname "ha-227346"
	I0819 17:19:06.849685   34814 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:19:06.849856   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:06.852359   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.852792   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:06.852817   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.852969   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:06.853130   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.853288   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.853399   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:06.853572   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:06.853782   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:06.853799   34814 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-227346 && echo "ha-227346" | sudo tee /etc/hostname
	I0819 17:19:06.975920   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-227346
	
	I0819 17:19:06.975946   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:06.978445   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.978839   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:06.978866   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:06.979061   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:06.979269   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.979408   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:06.979528   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:06.979684   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:06.979892   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:06.979916   34814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-227346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-227346/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-227346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:19:07.090795   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:19:07.090829   34814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:19:07.090848   34814 buildroot.go:174] setting up certificates
	I0819 17:19:07.090858   34814 provision.go:84] configureAuth start
	I0819 17:19:07.090870   34814 main.go:141] libmachine: (ha-227346) Calling .GetMachineName
	I0819 17:19:07.091142   34814 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:19:07.093781   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.094254   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.094285   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.094428   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:07.096812   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.097232   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.097253   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.097399   34814 provision.go:143] copyHostCerts
	I0819 17:19:07.097445   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:19:07.097527   34814 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:19:07.097547   34814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:19:07.097624   34814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:19:07.097752   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:19:07.097779   34814 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:19:07.097788   34814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:19:07.097835   34814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:19:07.097925   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:19:07.097953   34814 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:19:07.097961   34814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:19:07.097998   34814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:19:07.098083   34814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.ha-227346 san=[127.0.0.1 192.168.39.205 ha-227346 localhost minikube]
	I0819 17:19:07.195527   34814 provision.go:177] copyRemoteCerts
	I0819 17:19:07.195604   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:19:07.195627   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:07.198284   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.198652   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.198682   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.198852   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:07.199095   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:07.199278   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:07.199425   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:07.284575   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:19:07.284653   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:19:07.310417   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:19:07.310504   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 17:19:07.336833   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:19:07.336901   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:19:07.361454   34814 provision.go:87] duration metric: took 270.584231ms to configureAuth
	I0819 17:19:07.361477   34814 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:19:07.361733   34814 config.go:182] Loaded profile config "ha-227346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:19:07.361810   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:07.364415   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.364768   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:07.364805   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:07.364936   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:07.365108   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:07.365264   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:07.365378   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:07.365508   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:07.365686   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:07.365708   34814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:19:13.020156   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:19:13.020178   34814 machine.go:96] duration metric: took 6.281158215s to provisionDockerMachine
	I0819 17:19:13.020189   34814 start.go:293] postStartSetup for "ha-227346" (driver="kvm2")
	I0819 17:19:13.020198   34814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:19:13.020212   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.020567   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:19:13.020591   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.023566   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.023903   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.023929   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.024088   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.024280   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.024457   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.024577   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:13.150380   34814 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:19:13.157408   34814 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:19:13.157446   34814 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:19:13.157503   34814 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:19:13.157575   34814 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:19:13.157585   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:19:13.157660   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:19:13.219995   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:19:13.282930   34814 start.go:296] duration metric: took 262.713473ms for postStartSetup
	I0819 17:19:13.282969   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.283284   34814 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0819 17:19:13.283328   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.286488   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.286871   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.286905   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.287225   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.287431   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.287618   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.287843   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	W0819 17:19:13.482666   34814 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0819 17:19:13.482694   34814 fix.go:56] duration metric: took 6.764414155s for fixHost
	I0819 17:19:13.482716   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.485926   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.486337   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.486366   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.486573   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.486753   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.486946   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.487095   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.487278   34814 main.go:141] libmachine: Using SSH client type: native
	I0819 17:19:13.487531   34814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0819 17:19:13.487549   34814 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:19:13.850125   34814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724087953.810707998
	
	I0819 17:19:13.850155   34814 fix.go:216] guest clock: 1724087953.810707998
	I0819 17:19:13.850165   34814 fix.go:229] Guest: 2024-08-19 17:19:13.810707998 +0000 UTC Remote: 2024-08-19 17:19:13.482702262 +0000 UTC m=+6.888183844 (delta=328.005736ms)
	I0819 17:19:13.850214   34814 fix.go:200] guest clock delta is within tolerance: 328.005736ms
	I0819 17:19:13.850221   34814 start.go:83] releasing machines lock for "ha-227346", held for 7.131959558s
	I0819 17:19:13.850249   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.850502   34814 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:19:13.853336   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.853743   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.853773   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.853940   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.854470   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.854637   34814 main.go:141] libmachine: (ha-227346) Calling .DriverName
	I0819 17:19:13.854751   34814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:19:13.854799   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.854829   34814 ssh_runner.go:195] Run: cat /version.json
	I0819 17:19:13.854851   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHHostname
	I0819 17:19:13.857052   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857394   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.857420   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857440   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857574   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.857774   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.857888   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:13.857907   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:13.857910   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.858105   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHPort
	I0819 17:19:13.858125   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:13.858234   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHKeyPath
	I0819 17:19:13.858350   34814 main.go:141] libmachine: (ha-227346) Calling .GetSSHUsername
	I0819 17:19:13.858446   34814 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/ha-227346/id_rsa Username:docker}
	I0819 17:19:14.078065   34814 ssh_runner.go:195] Run: systemctl --version
	I0819 17:19:14.104899   34814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:19:14.559594   34814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:19:14.566915   34814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:19:14.566989   34814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:19:14.576686   34814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 17:19:14.576705   34814 start.go:495] detecting cgroup driver to use...
	I0819 17:19:14.576808   34814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:19:14.594018   34814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:19:14.608656   34814 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:19:14.608721   34814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:19:14.623490   34814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:19:14.636786   34814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:19:14.823555   34814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:19:15.034554   34814 docker.go:233] disabling docker service ...
	I0819 17:19:15.034628   34814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:19:15.054320   34814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:19:15.071384   34814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:19:15.265315   34814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:19:15.443241   34814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:19:15.458039   34814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:19:15.490664   34814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:19:15.490744   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.505623   34814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:19:15.505726   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.518043   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.530814   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.546281   34814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:19:15.563592   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.575696   34814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.588454   34814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:19:15.600217   34814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:19:15.611384   34814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:19:15.621080   34814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:19:15.801102   34814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:19:25.505905   34814 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.704764817s)
	I0819 17:19:25.505953   34814 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:19:25.506016   34814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:19:25.510527   34814 start.go:563] Will wait 60s for crictl version
	I0819 17:19:25.510575   34814 ssh_runner.go:195] Run: which crictl
	I0819 17:19:25.514507   34814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:19:25.550421   34814 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 17:19:25.550506   34814 ssh_runner.go:195] Run: crio --version
	I0819 17:19:25.578068   34814 ssh_runner.go:195] Run: crio --version
	I0819 17:19:25.607507   34814 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:19:25.608737   34814 main.go:141] libmachine: (ha-227346) Calling .GetIP
	I0819 17:19:25.611398   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:25.611745   34814 main.go:141] libmachine: (ha-227346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:14:7f", ip: ""} in network mk-ha-227346: {Iface:virbr1 ExpiryTime:2024-08-19 18:09:18 +0000 UTC Type:0 Mac:52:54:00:ba:14:7f Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-227346 Clientid:01:52:54:00:ba:14:7f}
	I0819 17:19:25.611775   34814 main.go:141] libmachine: (ha-227346) DBG | domain ha-227346 has defined IP address 192.168.39.205 and MAC address 52:54:00:ba:14:7f in network mk-ha-227346
	I0819 17:19:25.611972   34814 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:19:25.616275   34814 kubeadm.go:883] updating cluster {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:19:25.616423   34814 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:19:25.616465   34814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:19:25.665452   34814 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:19:25.665475   34814 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:19:25.665539   34814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:19:25.698060   34814 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:19:25.698081   34814 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:19:25.698090   34814 kubeadm.go:934] updating node { 192.168.39.205 8443 v1.31.0 crio true true} ...
	I0819 17:19:25.698197   34814 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-227346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:19:25.698272   34814 ssh_runner.go:195] Run: crio config
	I0819 17:19:25.746476   34814 cni.go:84] Creating CNI manager for ""
	I0819 17:19:25.746502   34814 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0819 17:19:25.746514   34814 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:19:25.746542   34814 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-227346 NodeName:ha-227346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:19:25.746735   34814 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-227346"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
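The block above is the full kubeadm configuration minikube renders for this node (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged sketch only (this log does not show the exact invocation), a config in this form is normally consumed by kubeadm via its --config flag:

  # Illustrative; timing and any extra flags are assumptions, only the --config mechanism is standard kubeadm
  sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml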
	
	I0819 17:19:25.746766   34814 kube-vip.go:115] generating kube-vip config ...
	I0819 17:19:25.746810   34814 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0819 17:19:25.757770   34814 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 17:19:25.757881   34814 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
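The kube-vip manifest above is a static pod: it is copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line below), which is the staticPodPath from the KubeletConfiguration, so the kubelet runs it directly and it in turn advertises the HA VIP 192.168.39.254 on eth0. A quick hedged check, assuming shell access to the node and that crictl is available there:

  # Sketch: confirm the static pod manifest is in place and the kube-vip container is running
  ls -l /etc/kubernetes/manifests/kube-vip.yaml
  sudo crictl ps --name kube-vip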
	I0819 17:19:25.757941   34814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:19:25.767080   34814 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:19:25.767129   34814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 17:19:25.775863   34814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0819 17:19:25.791381   34814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:19:25.806261   34814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0819 17:19:25.821946   34814 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 17:19:25.838039   34814 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0819 17:19:25.843381   34814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:19:25.982876   34814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:19:25.996514   34814 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346 for IP: 192.168.39.205
	I0819 17:19:25.996564   34814 certs.go:194] generating shared ca certs ...
	I0819 17:19:25.996584   34814 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:19:25.996770   34814 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:19:25.996825   34814 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:19:25.996841   34814 certs.go:256] generating profile certs ...
	I0819 17:19:25.996956   34814 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/client.key
	I0819 17:19:25.996991   34814 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf
	I0819 17:19:25.997010   34814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205 192.168.39.189 192.168.39.95 192.168.39.254]
	I0819 17:19:26.302685   34814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf ...
	I0819 17:19:26.302721   34814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf: {Name:mkcad67e542334192c3bbfd9c0d1662abd4a6acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:19:26.302883   34814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf ...
	I0819 17:19:26.302894   34814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf: {Name:mk7238b084053b19a8639324314e3f7dc6d64dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:19:26.302968   34814 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt.40fdd3cf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt
	I0819 17:19:26.303115   34814 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key.40fdd3cf -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key
	I0819 17:19:26.303234   34814 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key
	I0819 17:19:26.303254   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:19:26.303266   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:19:26.303279   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:19:26.303292   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:19:26.303304   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:19:26.303316   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:19:26.303332   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:19:26.303344   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:19:26.303389   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:19:26.303416   34814 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:19:26.303424   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:19:26.303445   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:19:26.303468   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:19:26.303490   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:19:26.303526   34814 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:19:26.303552   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.303566   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.303578   34814 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.304092   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:19:26.327916   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:19:26.349621   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:19:26.371172   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:19:26.392952   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0819 17:19:26.414840   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:19:26.437655   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:19:26.459800   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/ha-227346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:19:26.482472   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:19:26.505419   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:19:26.527376   34814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:19:26.549938   34814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:19:26.565346   34814 ssh_runner.go:195] Run: openssl version
	I0819 17:19:26.570651   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:19:26.580647   34814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.585302   34814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.585355   34814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:19:26.590571   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:19:26.600327   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:19:26.610351   34814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.614594   34814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.614648   34814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:19:26.620064   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:19:26.629127   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:19:26.639202   34814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.643275   34814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.643337   34814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:19:26.648685   34814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
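The three repetitions above follow OpenSSL's hashed-directory convention: each CA certificate is placed under /usr/share/ca-certificates, and a symlink named after its subject hash (the output of openssl x509 -hash -noout) is created in /etc/ssl/certs so default OpenSSL lookups can find it. A minimal sketch of the same pattern, using the minikubeCA paths from this log:

  # Sketch: hash-named symlink as used above (b5213941 is the hash printed for minikubeCA.pem)
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"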
	I0819 17:19:26.657572   34814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:19:26.661674   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 17:19:26.667153   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 17:19:26.672449   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 17:19:26.677889   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 17:19:26.683406   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 17:19:26.688581   34814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
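The six checks above use openssl's -checkend test: the command exits non-zero if the certificate expires within the given number of seconds (86400, i.e. 24 hours), presumably so that expiring control-plane certificates can be regenerated before reuse. A hedged stand-alone sketch of the same check:

  # Sketch: exit non-zero (and print a warning) if the cert expires within 24 hours
  sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
    || echo "certificate expires within 24h"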
	I0819 17:19:26.693750   34814 kubeadm.go:392] StartCluster: {Name:ha-227346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-227346 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.189 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.96 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:19:26.693852   34814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:19:26.693888   34814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:19:26.728530   34814 cri.go:89] found id: "2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35"
	I0819 17:19:26.728556   34814 cri.go:89] found id: "9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f"
	I0819 17:19:26.728560   34814 cri.go:89] found id: "0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec"
	I0819 17:19:26.728564   34814 cri.go:89] found id: "b1163229fd0594539dc14e331d7cb09e7e69ac7030bf1399e654134fe2dd9792"
	I0819 17:19:26.728566   34814 cri.go:89] found id: "a909e07d87a29b9d6d81cf334d38e7b1829a3144044d74cb62a473deecdb3ef3"
	I0819 17:19:26.728569   34814 cri.go:89] found id: "2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d"
	I0819 17:19:26.728572   34814 cri.go:89] found id: "7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422"
	I0819 17:19:26.728575   34814 cri.go:89] found id: "681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9"
	I0819 17:19:26.728577   34814 cri.go:89] found id: "cc4afe373b9078bd6d32f8a9d5cda79bc9337d0fb22df3f80f1035725bcce3ac"
	I0819 17:19:26.728581   34814 cri.go:89] found id: "5d57a3c3b41e0c42cbc8e17808dbca8183361c3346f7448a85689ae54d35c28c"
	I0819 17:19:26.728585   34814 cri.go:89] found id: "64b09216d35d7b5d721e84026ab86c730b012b8603b100f6efb159f59ff28390"
	I0819 17:19:26.728588   34814 cri.go:89] found id: "0624a8dba0695bbb9f25e378b2f27266931b5e53e4c7e7efea0a0e4c36caa6f4"
	I0819 17:19:26.728592   34814 cri.go:89] found id: "e4e823e549cc30c59d88eaa68e65edcb083eaadad0a8f1266c9350ad48d548a6"
	I0819 17:19:26.728594   34814 cri.go:89] found id: "59dabea0b2cb125ebdd40ee96fbea8d8388e7e888b40631afa01e5176dd14fe9"
	I0819 17:19:26.728598   34814 cri.go:89] found id: "25c817915a7dfc3c87b8700adc34887a5436d4c39cfaf2371e00d23c845c05dd"
	I0819 17:19:26.728601   34814 cri.go:89] found id: "7367ba44817a269d4ccef71fac3e87c6b90d85908a99dbe5f4b619f63add1547"
	I0819 17:19:26.728603   34814 cri.go:89] found id: "ded6224ece6e43ce84257df9ddc5df9f3ac09c723d408383eb7e0e2226cc8577"
	I0819 17:19:26.728607   34814 cri.go:89] found id: "c1727fa7d7c9f4aff6f62922a6641830d3f6abc77500410fdd4462a42da906ed"
	I0819 17:19:26.728610   34814 cri.go:89] found id: ""
	I0819 17:19:26.728645   34814 ssh_runner.go:195] Run: sudo runc list -f json
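The container IDs listed above come from crictl, filtered to pods in the kube-system namespace via the io.kubernetes.pod.namespace label (the exact command is the Run: line just before the list). To dig into any one of them by hand, a hedged sketch using an ID from this run:

  # Sketch: re-list kube-system containers and inspect one (ID taken from the log above)
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
  sudo crictl inspect 2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35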
	
	
	==> CRI-O <==
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.441514048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088280441489533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3650d8e8-8592-4da5-8a80-8e0271107d32 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.442168753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10b2d5ff-9e9c-4e32-8dca-1dbec78255a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.442232679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10b2d5ff-9e9c-4e32-8dca-1dbec78255a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.442616637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724087969191161025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserv
er-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724087969139226401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50368df823e9293c1b958812b27fe383e5864cb749b66ac569626a5fa60c4ad4,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724087968843043489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502
e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35,PodSandboxId:de532343318745253aa7be80ee08b09bc84f1f4c8bbb62d49954c0f4e0d17172,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954210933943,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},
Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec,PodSandboxId:230c20f0fcd600367e75d96536069b6e404f65daf893c0dd8effc5a64d47cdc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724087953950200796,Labels:map[string
]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f,PodSandboxId:e034002a0c3bd629f7a1ec28dc607515752c38b3b64200aa674fe1df70fc63b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954009499316,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422,PodSandboxId:f166e32b9510a72fc238edc7f4a4b10477991c274d710fe9ca08ed3092f0790d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724087953673511183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d,PodSandboxId:230a5e56294f7478c3bf2f2659793f258dfb59f2fdda4f2cbb579d62e2684ce5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c
897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724087953674351607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9,PodSandboxId:b49334661e0b83e24b2cf9073bedb43670f4ada00bcceb5c7c34e78ab07d4c6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724087953646377268,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10b2d5ff-9e9c-4e32-8dca-1dbec78255a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.487952781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22c7dc9d-d926-4555-8480-91d436410c7d name=/runtime.v1.RuntimeService/Version
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.488031961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22c7dc9d-d926-4555-8480-91d436410c7d name=/runtime.v1.RuntimeService/Version
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.489355908Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5a5eb8d-05d7-4f95-b19a-af5a76cc2e25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.489837024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088280489811532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5a5eb8d-05d7-4f95-b19a-af5a76cc2e25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.490531039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29f7d9f5-c9f8-492e-be44-e28e1c68fcb1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.490597138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29f7d9f5-c9f8-492e-be44-e28e1c68fcb1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.490980715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724087969191161025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserv
er-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724087969139226401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50368df823e9293c1b958812b27fe383e5864cb749b66ac569626a5fa60c4ad4,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724087968843043489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502
e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35,PodSandboxId:de532343318745253aa7be80ee08b09bc84f1f4c8bbb62d49954c0f4e0d17172,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954210933943,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},
Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec,PodSandboxId:230c20f0fcd600367e75d96536069b6e404f65daf893c0dd8effc5a64d47cdc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724087953950200796,Labels:map[string
]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f,PodSandboxId:e034002a0c3bd629f7a1ec28dc607515752c38b3b64200aa674fe1df70fc63b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954009499316,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422,PodSandboxId:f166e32b9510a72fc238edc7f4a4b10477991c274d710fe9ca08ed3092f0790d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724087953673511183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d,PodSandboxId:230a5e56294f7478c3bf2f2659793f258dfb59f2fdda4f2cbb579d62e2684ce5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c
897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724087953674351607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9,PodSandboxId:b49334661e0b83e24b2cf9073bedb43670f4ada00bcceb5c7c34e78ab07d4c6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724087953646377268,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29f7d9f5-c9f8-492e-be44-e28e1c68fcb1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.531912403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fabcd642-a31e-427f-a19a-e24e8e38c329 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.532000857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fabcd642-a31e-427f-a19a-e24e8e38c329 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.532949697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff826a87-0d7b-4c17-b715-33bd66a7ed2e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.533513405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088280533488274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff826a87-0d7b-4c17-b715-33bd66a7ed2e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.533970305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d040d9bf-23e0-4fb8-aa80-9bf5b95d9022 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.534039351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d040d9bf-23e0-4fb8-aa80-9bf5b95d9022 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.534486141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724087969191161025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserv
er-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724087969139226401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50368df823e9293c1b958812b27fe383e5864cb749b66ac569626a5fa60c4ad4,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724087968843043489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502
e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35,PodSandboxId:de532343318745253aa7be80ee08b09bc84f1f4c8bbb62d49954c0f4e0d17172,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954210933943,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},
Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec,PodSandboxId:230c20f0fcd600367e75d96536069b6e404f65daf893c0dd8effc5a64d47cdc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724087953950200796,Labels:map[string
]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f,PodSandboxId:e034002a0c3bd629f7a1ec28dc607515752c38b3b64200aa674fe1df70fc63b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954009499316,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422,PodSandboxId:f166e32b9510a72fc238edc7f4a4b10477991c274d710fe9ca08ed3092f0790d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724087953673511183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d,PodSandboxId:230a5e56294f7478c3bf2f2659793f258dfb59f2fdda4f2cbb579d62e2684ce5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c
897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724087953674351607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9,PodSandboxId:b49334661e0b83e24b2cf9073bedb43670f4ada00bcceb5c7c34e78ab07d4c6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724087953646377268,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d040d9bf-23e0-4fb8-aa80-9bf5b95d9022 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.572972092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc9744a9-43ec-4927-95f5-2ef9ea13f6ca name=/runtime.v1.RuntimeService/Version
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.573096710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc9744a9-43ec-4927-95f5-2ef9ea13f6ca name=/runtime.v1.RuntimeService/Version
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.574205859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ea3f123-e956-4cd0-af71-add6b2689427 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.574733566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088280574697266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ea3f123-e956-4cd0-af71-add6b2689427 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.575229927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=298c3897-cd6a-47bb-9e69-b2cd7a525a96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.575310336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=298c3897-cd6a-47bb-9e69-b2cd7a525a96 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:24:40 ha-227346 crio[4086]: time="2024-08-19 17:24:40.575708946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db204c16e84ff5148c09208a263c4e9b416163857d8e0e6a38ca4bd8883cbfb8,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724088019992154134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724088015988671019,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724088012988791207,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199,PodSandboxId:162f86b2ab34c5f2f4ec6f86f0f02431f9709ee637bddf59389157cd732d7fd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087986994836837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932,PodSandboxId:f8bbfa8f41732a9352b46470fd2f2ebcb72281106e66c6e6cabf34002a2a68cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724087985989558637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-
7a028d357471,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17f7dbe7da34b5bf99e7da879c06b219529ccff7f93c9aa182264eb59f25d74,PodSandboxId:a4d2be5c777dc4a457f7a72876b6277875dd5c726f896d52ff3a831c8f608b7e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724087979296746113,Labe
ls:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e88411286fdced6b3ee02688711f6f43,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264,PodSandboxId:4858d061f29cac5938ae77d70b4b14b922d241f96996abb18366192e66f1f1bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724087969204590289,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f,PodSandboxId:a5dd1c02893ebeb57f72fc76ad7ab9ddcccf772053856130637956b7bc03feef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724087969158762105,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kube
rnetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c,PodSandboxId:048a398695dddd76d68447f109658b8b037537aa87cd43da4446eac31f2ab611,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724087969191161025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserv
er-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b52c02ccbd2d84de74b795b4ffd2de47,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032,PodSandboxId:e8f4568d8f119502c15989a2743f55cd54c95aa91f971d368d78acbc86a85983,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724087969125294820,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b,PodSandboxId:e300219ded16cd9a00c8181ad58326bd8fe09564fd9331c4c59c3fad70d0940e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724087969139226401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-227346,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0bb7d8f50a5822f2e8d4254badffee6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b,PodSandboxId:378bf7054ff3eb3972d32554a9797a7bef7683421aef723318815182174042c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724087968892612213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50368df823e9293c1b958812b27fe383e5864cb749b66ac569626a5fa60c4ad4,PodSandboxId:4855963e29b474efef64665635a82b2d259ffde68ea96fca29440dc1104e7327,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724087968843043489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ed502
e-5b16-4a13-9e5f-c1d271bea40b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35,PodSandboxId:de532343318745253aa7be80ee08b09bc84f1f4c8bbb62d49954c0f4e0d17172,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954210933943,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9s77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28ea7cc3-2a78-4b29-82c7-7a028d357471,},
Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec,PodSandboxId:230c20f0fcd600367e75d96536069b6e404f65daf893c0dd8effc5a64d47cdc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724087953950200796,Labels:map[string
]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwjmd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55731455-5f1e-4499-ae63-a8ad06f5553f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f,PodSandboxId:e034002a0c3bd629f7a1ec28dc607515752c38b3b64200aa674fe1df70fc63b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724087954009499316,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r68td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48b2c24-94f7-4ca4-8f99-420706cd0cb3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422,PodSandboxId:f166e32b9510a72fc238edc7f4a4b10477991c274d710fe9ca08ed3092f0790d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724087953673511183,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xpm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e3f9ad-e32e-4a45-9184-72fd5076b2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d,PodSandboxId:230a5e56294f7478c3bf2f2659793f258dfb59f2fdda4f2cbb579d62e2684ce5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c
897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724087953674351607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d16d043cd45d88d7e4b5a95563c9d12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9,PodSandboxId:b49334661e0b83e24b2cf9073bedb43670f4ada00bcceb5c7c34e78ab07d4c6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724087953646377268,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-227346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b77a066592a139540b9afb0badf56c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=298c3897-cd6a-47bb-9e69-b2cd7a525a96 name=/runtime.v1.RuntimeService/ListContainers
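
The CRI-O journal excerpt above is debug-level tracing of the kubelet's periodic runtime polling: each cycle issues Version, ImageFsInfo, and ListContainers calls over the CRI, and the runtime answers with the full container list shown. A minimal sketch for pulling the same journal window from the node by hand, assuming this run's profile name is ha-227346 and that crio is managed as a systemd unit (both appear to hold in the log above):

	out/minikube-linux-amd64 -p ha-227346 ssh "sudo journalctl -u crio --no-pager --since '10 min ago'"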
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db204c16e84ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 minutes ago       Running             storage-provisioner       3                   4855963e29b47       storage-provisioner
	7697b63732dd2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   4 minutes ago       Running             kube-controller-manager   3                   e300219ded16c       kube-controller-manager-ha-227346
	fc00ac73decf5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   4 minutes ago       Running             kube-apiserver            4                   048a398695ddd       kube-apiserver-ha-227346
	a231cce4062c4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 minutes ago       Running             coredns                   2                   162f86b2ab34c       coredns-6f6b679f8f-r68td
	6a8de80bd2e8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 minutes ago       Running             coredns                   2                   f8bbfa8f41732       coredns-6f6b679f8f-9s77g
	a17f7dbe7da34       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   5 minutes ago       Running             kube-vip                  0                   a4d2be5c777dc       kube-vip-ha-227346
	49db2955c5753       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   5 minutes ago       Running             kube-proxy                2                   4858d061f29ca       kube-proxy-9xpm4
	2e0b325ce6a57       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   5 minutes ago       Exited              kube-apiserver            3                   048a398695ddd       kube-apiserver-ha-227346
	b3d8e85b57f15       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   5 minutes ago       Running             kindnet-cni               2                   a5dd1c02893eb       kindnet-lwjmd
	0bdf151e1296e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   5 minutes ago       Exited              kube-controller-manager   2                   e300219ded16c       kube-controller-manager-ha-227346
	a786925478954       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   5 minutes ago       Running             etcd                      2                   e8f4568d8f119       etcd-ha-227346
	359d51dcc978c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   5 minutes ago       Running             kube-scheduler            2                   378bf7054ff3e       kube-scheduler-ha-227346
	50368df823e92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Exited              storage-provisioner       2                   4855963e29b47       storage-provisioner
	2fd299c1d9e8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 minutes ago       Exited              coredns                   1                   de53234331874       coredns-6f6b679f8f-9s77g
	9a18b773d6ac1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 minutes ago       Exited              coredns                   1                   e034002a0c3bd       coredns-6f6b679f8f-r68td
	0210787eef3fd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   5 minutes ago       Exited              kindnet-cni               1                   230c20f0fcd60       kindnet-lwjmd
	2c9d9b1537d36       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   5 minutes ago       Exited              kube-scheduler            1                   230a5e56294f7       kube-scheduler-ha-227346
	7cce81e6b7d57       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   5 minutes ago       Exited              kube-proxy                1                   f166e32b9510a       kube-proxy-9xpm4
	681d8ae88a598       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   5 minutes ago       Exited              etcd                      1                   b49334661e0b8       etcd-ha-227346
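The container listing above comes from CRI-O on the primary control-plane node. A minimal sketch of reproducing it outside the test harness, assuming the minikube profile is named ha-227346:

  # list all containers, including exited ones, via the CRI-O CLI on the node
  out/minikube-linux-amd64 -p ha-227346 ssh "sudo crictl ps -a"

The Exited rows are earlier attempts of the same pods that were restarted during the test; their Running counterparts appear above them with higher attempt counts.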
	
	
	==> coredns [2fd299c1d9e8ff3997e330cd3728fd7186761ba9dc9ca76d0c4d62aeea006a35] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49322 - 15954 "HINFO IN 359130325150598962.3044329225708925938. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006159018s
	
	
	==> coredns [6a8de80bd2e8f1a1c4a3915a97d273a9d77bde5920160f1b11279a793ba3d932] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9a18b773d6ac1b63d103b7def44103a87743f495f56fe40fc07583736be6de0f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34893 - 3413 "HINFO IN 2354490025026756339.4743662506029097874. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011007743s
	
	
	==> coredns [a231cce4062c4cb9d7052c4156d13bbdce7dfef85f558ba652154251aa290199] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
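The CoreDNS logs above all show the same failure mode: list/watch calls to the in-cluster apiserver Service at 10.96.0.1:443 failing with "no route to host" or "connection refused" while the control plane was restarting. A minimal sketch of probing that path by hand, assuming the kubectl context carries the profile name ha-227346:

  # which apiserver endpoints currently back the kubernetes Service VIP
  kubectl --context ha-227346 -n default get endpoints kubernetes
  # raw reachability of the Service VIP from inside the node
  out/minikube-linux-amd64 -p ha-227346 ssh "curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz"

Any HTTP status code, even 401 or 403, shows the VIP is routable again; the errors logged above were failing at the TCP layer.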
	
	
	==> describe nodes <==
	Name:               ha-227346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_09_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:09:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:24:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:23:31 +0000   Mon, 19 Aug 2024 17:23:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:23:31 +0000   Mon, 19 Aug 2024 17:23:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:23:31 +0000   Mon, 19 Aug 2024 17:23:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:23:31 +0000   Mon, 19 Aug 2024 17:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    ha-227346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 80471ea49a664581949d80643cd4d82b
	  System UUID:                80471ea4-9a66-4581-949d-80643cd4d82b
	  Boot ID:                    b4e046ad-f0c8-4e0a-a3c8-ccc4927ebc7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-9s77g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-r68td             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-227346                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lwjmd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-227346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-227346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-9xpm4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-227346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-227346                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 4m27s              kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           4m30s              node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           4m22s              node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  RegisteredNode           3m18s              node-controller  Node ha-227346 event: Registered Node ha-227346 in Controller
	  Normal  NodeNotReady             102s               node-controller  Node ha-227346 status is now: NodeNotReady
	  Normal  NodeHasSufficientPID     69s (x2 over 14m)  kubelet          Node ha-227346 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    69s (x2 over 14m)  kubelet          Node ha-227346 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                69s (x2 over 14m)  kubelet          Node ha-227346 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  69s (x2 over 14m)  kubelet          Node ha-227346 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-227346-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_10_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:24:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:20:57 +0000   Mon, 19 Aug 2024 17:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-227346-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 feb788fca1734d35a419eead2319624a
	  System UUID:                feb788fc-a173-4d35-a419-eead2319624a
	  Boot ID:                    95b934f2-5cf6-467f-930f-f1c65d975696
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dncbb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     busybox-7dff88458-k75xm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-227346-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-mk55z                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-227346-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-227346-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-6lhlp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-227346-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-227346-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-227346-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           14m                    node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  CIDRAssignmentFailed     14m                    cidrAllocator    Node ha-227346-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-227346-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-227346-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                    node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-227346-m02 status is now: NodeNotReady
	  Normal  Starting                 4m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node ha-227346-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node ha-227346-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-227346-m02 event: Registered Node ha-227346-m02 in Controller
	
	
	Name:               ha-227346-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-227346-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=ha-227346
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_12_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:12:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-227346-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:22:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:22:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:22:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:22:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 17:21:53 +0000   Mon, 19 Aug 2024 17:22:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-227346-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8069ae3ff9145c9b8ed7bff35cdea96
	  System UUID:                d8069ae3-ff91-45c9-b8ed-7bff35cdea96
	  Boot ID:                    5f021679-6569-4e7d-8eea-422cae4a7c93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zjvnd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-sctvz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-7ktdr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                    node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   CIDRAssignmentFailed     11m                    cidrAllocator    Node ha-227346-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)      kubelet          Node ha-227346-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)      kubelet          Node ha-227346-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)      kubelet          Node ha-227346-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-227346-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m31s                  node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   NodeNotReady             3m51s                  node-controller  Node ha-227346-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-227346-m04 event: Registered Node ha-227346-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-227346-m04 has been rebooted, boot id: 5f021679-6569-4e7d-8eea-422cae4a7c93
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-227346-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-227346-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m48s                  kubelet          Node ha-227346-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-227346-m04 status is now: NodeNotReady
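The node descriptions above can be regenerated directly; a minimal sketch, again assuming the context name ha-227346:

  kubectl --context ha-227346 get nodes -o wide
  kubectl --context ha-227346 describe node ha-227346 ha-227346-m02 ha-227346-m04

Note that ha-227346-m04 still carries node.kubernetes.io/unreachable taints and Unknown conditions because its kubelet stopped posting status (lease last renewed at 17:22:13), which matches the final NodeNotReady event in its list.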
	
	
	==> dmesg <==
	[  +6.218849] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.053481] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061538] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.190350] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134022] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.260627] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +3.698622] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.234958] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.058962] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.409298] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.084115] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.075846] kauditd_printk_skb: 23 callbacks suppressed
	[Aug19 17:10] kauditd_printk_skb: 36 callbacks suppressed
	[ +43.945746] kauditd_printk_skb: 26 callbacks suppressed
	[Aug19 17:19] systemd-fstab-generator[3963]: Ignoring "noauto" option for root device
	[  +0.178314] systemd-fstab-generator[3988]: Ignoring "noauto" option for root device
	[  +0.254704] systemd-fstab-generator[4025]: Ignoring "noauto" option for root device
	[  +0.181116] systemd-fstab-generator[4042]: Ignoring "noauto" option for root device
	[  +0.362837] systemd-fstab-generator[4073]: Ignoring "noauto" option for root device
	[ +10.206975] systemd-fstab-generator[4370]: Ignoring "noauto" option for root device
	[  +0.087740] kauditd_printk_skb: 192 callbacks suppressed
	[ +10.067480] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.033540] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.000584] kauditd_printk_skb: 5 callbacks suppressed
	[Aug19 17:20] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [681d8ae88a59885a721182560f8d6c83769fc28283d21d7df6d41a6e655927a9] <==
	{"level":"info","ts":"2024-08-19T17:19:15.070750Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"63397357a2c0e4bd"}
	{"level":"info","ts":"2024-08-19T17:19:15.071129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(7149872703657272509 12889633661048190622)"}
	{"level":"info","ts":"2024-08-19T17:19:15.071211Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e"}
	{"level":"info","ts":"2024-08-19T17:19:15.080762Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:19:15.083555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(7149872703657272509 12889633661048190622) learners=(11157552390870920589)"}
	{"level":"info","ts":"2024-08-19T17:19:15.083640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e","added-peer-id":"9ad796c8c4abed8d","added-peer-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-19T17:19:15.083672Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.083689Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.087419Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-19T17:19:15.087562Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:19:15.087598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:19:15.087608Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:19:15.091357Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T17:19:15.091583Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b2e12d85c3b1f69e","initial-advertise-peer-urls":["https://192.168.39.205:2380"],"listen-peer-urls":["https://192.168.39.205:2380"],"advertise-client-urls":["https://192.168.39.205:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.205:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T17:19:15.091623Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T17:19:15.091693Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2024-08-19T17:19:15.091712Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2024-08-19T17:19:15.095791Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.095837Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d","remote-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-19T17:19:15.099391Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099432Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099446Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099652Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:19:15.099856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(7149872703657272509 11157552390870920589 12889633661048190622)"}
	{"level":"info","ts":"2024-08-19T17:19:15.099909Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e"}
	
	
	==> etcd [a786925478954498e3879504434075afb91e23c1397e4263ab6747753172c032] <==
	{"level":"info","ts":"2024-08-19T17:21:13.623545Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:21:13.623635Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:21:13.645371Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b2e12d85c3b1f69e","to":"9ad796c8c4abed8d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-19T17:21:13.645505Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:21:13.653359Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b2e12d85c3b1f69e","to":"9ad796c8c4abed8d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-19T17:21:13.653436Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:22:06.640818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(7149872703657272509 12889633661048190622)"}
	{"level":"info","ts":"2024-08-19T17:22:06.642907Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e","removed-remote-peer-id":"9ad796c8c4abed8d","removed-remote-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-19T17:22:06.643037Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"warn","ts":"2024-08-19T17:22:06.643115Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"b2e12d85c3b1f69e","removed-member-id":"9ad796c8c4abed8d"}
	{"level":"warn","ts":"2024-08-19T17:22:06.643205Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-19T17:22:06.643926Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:22:06.644009Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"warn","ts":"2024-08-19T17:22:06.645181Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:22:06.645241Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:22:06.645350Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"warn","ts":"2024-08-19T17:22:06.645583Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d","error":"context canceled"}
	{"level":"warn","ts":"2024-08-19T17:22:06.645623Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"9ad796c8c4abed8d","error":"failed to read 9ad796c8c4abed8d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-19T17:22:06.645656Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"warn","ts":"2024-08-19T17:22:06.645736Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d","error":"context canceled"}
	{"level":"info","ts":"2024-08-19T17:22:06.645806Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b2e12d85c3b1f69e","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:22:06.645823Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"info","ts":"2024-08-19T17:22:06.645835Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b2e12d85c3b1f69e","removed-remote-peer-id":"9ad796c8c4abed8d"}
	{"level":"warn","ts":"2024-08-19T17:22:06.659250Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b2e12d85c3b1f69e","remote-peer-id-stream-handler":"b2e12d85c3b1f69e","remote-peer-id-from":"9ad796c8c4abed8d"}
	{"level":"warn","ts":"2024-08-19T17:22:06.663575Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.95:34874","server-name":"","error":"read tcp 192.168.39.205:2380->192.168.39.95:34874: read: connection reset by peer"}
	
	
	==> kernel <==
	 17:24:41 up 15 min,  0 users,  load average: 0.19, 0.52, 0.36
	Linux ha-227346 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0210787eef3fd90724e576db86b7f9845e697c810902046d314cce9cd6155cec] <==
	I0819 17:19:14.539758       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0819 17:19:14.540043       1 main.go:139] hostIP = 192.168.39.205
	podIP = 192.168.39.205
	I0819 17:19:14.548236       1 main.go:148] setting mtu 1500 for CNI 
	I0819 17:19:14.548266       1 main.go:178] kindnetd IP family: "ipv4"
	I0819 17:19:14.548282       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0819 17:19:15.164271       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	
	
	==> kindnet [b3d8e85b57f15ba9de3964a060b9fc3d80e63cd200db7ae096c327c72548927f] <==
	I0819 17:24:00.116999       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:24:10.110260       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:24:10.110323       1 main.go:299] handling current node
	I0819 17:24:10.110341       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:24:10.110346       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:24:10.110471       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:24:10.110487       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:24:20.117231       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:24:20.117396       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:24:20.117567       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:24:20.117591       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:24:20.117661       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:24:20.117683       1 main.go:299] handling current node
	I0819 17:24:30.107603       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:24:30.107677       1 main.go:299] handling current node
	I0819 17:24:30.107702       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:24:30.107711       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
	I0819 17:24:30.107907       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:24:30.107935       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:24:40.110427       1 main.go:295] Handling node with IPs: map[192.168.39.96:{}]
	I0819 17:24:40.110516       1 main.go:322] Node ha-227346-m04 has CIDR [10.244.3.0/24] 
	I0819 17:24:40.110756       1 main.go:295] Handling node with IPs: map[192.168.39.205:{}]
	I0819 17:24:40.110780       1 main.go:299] handling current node
	I0819 17:24:40.110804       1 main.go:295] Handling node with IPs: map[192.168.39.189:{}]
	I0819 17:24:40.110810       1 main.go:322] Node ha-227346-m02 has CIDR [10.244.1.0/24] 
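The running kindnet container cycles through the known nodes every ten seconds and keeps one route per remote pod CIDR. A minimal sketch of inspecting the result on the primary node, with the profile name ha-227346 assumed as before:

  out/minikube-linux-amd64 -p ha-227346 ssh "ip route show | grep 10.244"

Given the state logged above, this should list routes for 10.244.1.0/24 via 192.168.39.189 and 10.244.3.0/24 via 192.168.39.96, while 10.244.0.0/24 stays local to the node.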
	
	
	==> kube-apiserver [2e0b325ce6a57084c4ab883d9d95a489c2a9802412ad408dd356f6aebf08666c] <==
	I0819 17:19:29.645536       1 options.go:228] external host was not specified, using 192.168.39.205
	I0819 17:19:29.647470       1 server.go:142] Version: v1.31.0
	I0819 17:19:29.647548       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:19:30.271705       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 17:19:30.287225       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:19:30.290029       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 17:19:30.290185       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 17:19:30.290482       1 instance.go:232] Using reconciler: lease
	W0819 17:19:50.270484       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0819 17:19:50.270484       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0819 17:19:50.291206       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [fc00ac73decf51171a36e73e17f220e2d500056daab06c38c1dcf6d67e012c8f] <==
	I0819 17:20:14.630125       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:20:14.630161       1 policy_source.go:224] refreshing policies
	I0819 17:20:14.643997       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:20:14.683147       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:20:14.683344       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:20:14.683425       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 17:20:14.683795       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:20:14.684309       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:20:14.684807       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:20:14.685042       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:20:14.685157       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:20:14.685484       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:20:14.685685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:20:14.687419       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 17:20:14.689146       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:20:14.692900       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0819 17:20:14.710743       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.95]
	I0819 17:20:14.713417       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:20:14.720281       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:20:14.724894       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 17:20:14.731579       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 17:20:15.592852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 17:20:15.946773       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.205 192.168.39.95]
	W0819 17:20:26.080386       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.205]
	W0819 17:22:15.951167       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.205]
	
	
	==> kube-controller-manager [0bdf151e1296e20d68c07590c24fdf6c369d5b2edd5612d0f0041e4650e04f3b] <==
	I0819 17:19:30.095204       1 serving.go:386] Generated self-signed cert in-memory
	I0819 17:19:30.607687       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 17:19:30.607780       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:19:30.609610       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:19:30.609816       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 17:19:30.610361       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 17:19:30.610454       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 17:19:51.297230       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.205:8443/healthz\": dial tcp 192.168.39.205:8443: connect: connection refused"
	
	
	==> kube-controller-manager [7697b63732dd2bc3cdd85bac53ba8e9f05cc44d05bb9fa51923140276ad47ed6] <==
	I0819 17:22:59.047087       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a29d2a94-086e-4aca-b7bb-913f98a9477b", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q7wdj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q7wdj": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:22:59.071365       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q7wdj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q7wdj\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 17:22:59.071419       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a29d2a94-086e-4aca-b7bb-913f98a9477b", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q7wdj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q7wdj": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:22:59.078247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="36.357318ms"
	I0819 17:22:59.078352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="64.434µs"
	I0819 17:22:59.109400       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.632337ms"
	I0819 17:22:59.109612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="77.089µs"
	I0819 17:22:59.118238       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-227346-m03"
	I0819 17:22:59.118271       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-227346-m03"
	I0819 17:22:59.157276       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-227346-m03"
	I0819 17:22:59.157379       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-227346-m03"
	I0819 17:22:59.183244       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-227346-m03"
	I0819 17:23:05.377618       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346"
	I0819 17:23:08.953222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346"
	I0819 17:23:31.050973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346"
	I0819 17:23:31.078393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346"
	I0819 17:23:31.170301       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q7wdj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q7wdj\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 17:23:31.170652       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a29d2a94-086e-4aca-b7bb-913f98a9477b", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q7wdj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q7wdj": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:23:31.305546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="157.182541ms"
	E0819 17:23:31.308678       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-6f6b679f8f\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-6f6b679f8f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0819 17:23:31.306181       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a29d2a94-086e-4aca-b7bb-913f98a9477b", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q7wdj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q7wdj": the object has been modified; please apply your changes to the latest version and try again
	I0819 17:23:31.305801       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q7wdj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q7wdj\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 17:23:31.311373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="125.401µs"
	I0819 17:23:31.316772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="92.882µs"
	I0819 17:23:34.063397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-227346"
	
	
	==> kube-proxy [49db2955c575381cfdfdc12a855fd70c06b7450b4ac4ddd6554400f704ca9264] <==
	E0819 17:20:12.993558       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-227346\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0819 17:20:12.993614       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0819 17:20:12.993717       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:20:13.071379       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:20:13.071486       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:20:13.071547       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:20:13.076311       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:20:13.076684       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:20:13.076710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:20:13.079675       1 config.go:197] "Starting service config controller"
	I0819 17:20:13.079755       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:20:13.079798       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:20:13.079817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:20:13.080665       1 config.go:326] "Starting node config controller"
	I0819 17:20:13.080692       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0819 17:20:16.065596       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0819 17:20:16.073304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-227346&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 17:20:16.076340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-227346&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:20:16.073454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 17:20:16.076383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0819 17:20:16.073527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0819 17:20:16.076406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0819 17:20:17.081006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:20:17.280894       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:20:17.380088       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [7cce81e6b7d5761188c2f6f989f7908df07a132a8da33f267907f08438d7d422] <==
	
	
	==> kube-scheduler [2c9d9b1537d366e3b8e2561c9d51eff1f0c64679834e97966f28d9d561cfdd5d] <==
	
	
	==> kube-scheduler [359d51dcc978ce64dc588791333352138d33598730fd0c21fd62e54c1e9d833b] <==
	W0819 17:20:07.467707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.205:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:07.467778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.205:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:07.557994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.205:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:07.558217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.205:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:08.168198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.205:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:08.168255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.205:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:08.796608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.205:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:08.796691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.205:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:08.939393       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.205:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:08.939456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.205:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:10.496651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.205:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:10.496787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.205:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:10.526497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.205:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:10.526604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.205:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:10.698428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.205:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:10.698565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.205:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:11.364587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.205:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.205:8443: connect: connection refused
	E0819 17:20:11.364717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.205:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.205:8443: connect: connection refused" logger="UnhandledError"
	W0819 17:20:14.633903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:20:14.634026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:20:30.107282       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:22:03.339357       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zjvnd\": pod busybox-7dff88458-zjvnd is already assigned to node \"ha-227346-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-zjvnd" node="ha-227346-m04"
	E0819 17:22:03.339724       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 01918061-912e-4677-b687-c97fb1c14a7d(default/busybox-7dff88458-zjvnd) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-zjvnd"
	E0819 17:22:03.342132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-zjvnd\": pod busybox-7dff88458-zjvnd is already assigned to node \"ha-227346-m04\"" pod="default/busybox-7dff88458-zjvnd"
	I0819 17:22:03.342296       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-zjvnd" node="ha-227346-m04"
	
	
	==> kubelet <==
	Aug 19 17:23:31 ha-227346 kubelet[1301]: E0819 17:23:31.201534    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088211201036012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:23:41 ha-227346 kubelet[1301]: E0819 17:23:41.005205    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:23:41 ha-227346 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:23:41 ha-227346 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:23:41 ha-227346 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:23:41 ha-227346 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:23:41 ha-227346 kubelet[1301]: E0819 17:23:41.204328    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088221203949877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:23:41 ha-227346 kubelet[1301]: E0819 17:23:41.204365    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088221203949877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:23:51 ha-227346 kubelet[1301]: E0819 17:23:51.206884    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088231206454808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:23:51 ha-227346 kubelet[1301]: E0819 17:23:51.206989    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088231206454808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:01 ha-227346 kubelet[1301]: E0819 17:24:01.209480    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088241208337265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:01 ha-227346 kubelet[1301]: E0819 17:24:01.210251    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088241208337265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:11 ha-227346 kubelet[1301]: E0819 17:24:11.214021    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088251213586108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:11 ha-227346 kubelet[1301]: E0819 17:24:11.214406    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088251213586108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:21 ha-227346 kubelet[1301]: E0819 17:24:21.216817    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088261216158122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:21 ha-227346 kubelet[1301]: E0819 17:24:21.216852    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088261216158122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:31 ha-227346 kubelet[1301]: E0819 17:24:31.219150    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088271218645344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:31 ha-227346 kubelet[1301]: E0819 17:24:31.219177    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088271218645344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:41 ha-227346 kubelet[1301]: E0819 17:24:41.011749    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:24:41 ha-227346 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:24:41 ha-227346 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:24:41 ha-227346 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:24:41 ha-227346 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:24:41 ha-227346 kubelet[1301]: E0819 17:24:41.223010    1301 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088281222692575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:24:41 ha-227346 kubelet[1301]: E0819 17:24:41.223042    1301 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724088281222692575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146706,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 17:24:40.167466   36946 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19478-10654/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-227346 -n ha-227346
helpers_test.go:261: (dbg) Run:  kubectl --context ha-227346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.75s)
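For context on the two post-mortem probes recorded above (helpers_test.go:254 and helpers_test.go:261), the following Go sketch shows how those same commands could be invoked from a standalone program. This is an illustrative reconstruction only, not the actual helpers_test.go implementation: the profile name and command-line arguments are taken verbatim from the log lines above, while the use of os/exec and the printing logic are assumptions.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile under test, taken from the post-mortem log above; adjust as needed.
	profile := "ha-227346"

	// API-server state, mirroring the command shown at helpers_test.go:254.
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
	fmt.Printf("apiserver: %s err=%v\n", out, err)

	// Pods that are not Running, mirroring the command shown at helpers_test.go:261.
	out, err = exec.Command("kubectl", "--context", profile, "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running").CombinedOutput()
	fmt.Printf("non-running pods: %s err=%v\n", out, err)
}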

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (325.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-188752
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-188752
E0819 17:38:15.960952   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:38:24.332779   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-188752: exit status 82 (2m1.768859967s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-188752-m03"  ...
	* Stopping node "multinode-188752-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-188752" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-188752 --wait=true -v=8 --alsologtostderr
E0819 17:40:21.263111   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-188752 --wait=true -v=8 --alsologtostderr: (3m21.25345053s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-188752
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-188752 -n multinode-188752
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-188752 logs -n 25: (1.428914695s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2485370709/001/cp-test_multinode-188752-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752:/home/docker/cp-test_multinode-188752-m02_multinode-188752.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752 sudo cat                                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m02_multinode-188752.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03:/home/docker/cp-test_multinode-188752-m02_multinode-188752-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752-m03 sudo cat                                   | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m02_multinode-188752-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp testdata/cp-test.txt                                                | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2485370709/001/cp-test_multinode-188752-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752:/home/docker/cp-test_multinode-188752-m03_multinode-188752.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752 sudo cat                                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m03_multinode-188752.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02:/home/docker/cp-test_multinode-188752-m03_multinode-188752-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752-m02 sudo cat                                   | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m03_multinode-188752-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-188752 node stop m03                                                          | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	| node    | multinode-188752 node start                                                             | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:37 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-188752                                                                | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:37 UTC |                     |
	| stop    | -p multinode-188752                                                                     | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:37 UTC |                     |
	| start   | -p multinode-188752                                                                     | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:39 UTC | 19 Aug 24 17:42 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-188752                                                                | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:42 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:39:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:39:21.965809   45795 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:39:21.965910   45795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:39:21.965917   45795 out.go:358] Setting ErrFile to fd 2...
	I0819 17:39:21.965922   45795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:39:21.966090   45795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:39:21.966623   45795 out.go:352] Setting JSON to false
	I0819 17:39:21.967530   45795 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4907,"bootTime":1724084255,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:39:21.967589   45795 start.go:139] virtualization: kvm guest
	I0819 17:39:21.970187   45795 out.go:177] * [multinode-188752] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:39:21.971750   45795 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:39:21.971747   45795 notify.go:220] Checking for updates...
	I0819 17:39:21.974537   45795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:39:21.975877   45795 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:39:21.977146   45795 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:39:21.978467   45795 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:39:21.979760   45795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:39:21.981406   45795 config.go:182] Loaded profile config "multinode-188752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:39:21.981515   45795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:39:21.981931   45795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:39:21.981982   45795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:39:21.997332   45795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I0819 17:39:21.997750   45795 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:39:21.998269   45795 main.go:141] libmachine: Using API Version  1
	I0819 17:39:21.998292   45795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:39:21.998592   45795 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:39:21.998787   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:39:22.035526   45795 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 17:39:22.036816   45795 start.go:297] selected driver: kvm2
	I0819 17:39:22.036842   45795 start.go:901] validating driver "kvm2" against &{Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:39:22.036969   45795 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:39:22.037264   45795 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:39:22.037337   45795 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:39:22.052193   45795 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:39:22.052975   45795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:39:22.053036   45795 cni.go:84] Creating CNI manager for ""
	I0819 17:39:22.053047   45795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 17:39:22.053101   45795 start.go:340] cluster config:
	{Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:39:22.053241   45795 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:39:22.055954   45795 out.go:177] * Starting "multinode-188752" primary control-plane node in "multinode-188752" cluster
	I0819 17:39:22.057331   45795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:39:22.057387   45795 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:39:22.057397   45795 cache.go:56] Caching tarball of preloaded images
	I0819 17:39:22.057471   45795 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:39:22.057481   45795 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:39:22.057589   45795 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/config.json ...
	I0819 17:39:22.057790   45795 start.go:360] acquireMachinesLock for multinode-188752: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:39:22.057831   45795 start.go:364] duration metric: took 23.444µs to acquireMachinesLock for "multinode-188752"
	I0819 17:39:22.057849   45795 start.go:96] Skipping create...Using existing machine configuration
	I0819 17:39:22.057860   45795 fix.go:54] fixHost starting: 
	I0819 17:39:22.058105   45795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:39:22.058133   45795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:39:22.072213   45795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0819 17:39:22.072703   45795 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:39:22.073363   45795 main.go:141] libmachine: Using API Version  1
	I0819 17:39:22.073389   45795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:39:22.073737   45795 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:39:22.073912   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:39:22.074078   45795 main.go:141] libmachine: (multinode-188752) Calling .GetState
	I0819 17:39:22.075759   45795 fix.go:112] recreateIfNeeded on multinode-188752: state=Running err=<nil>
	W0819 17:39:22.075777   45795 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 17:39:22.077576   45795 out.go:177] * Updating the running kvm2 "multinode-188752" VM ...
	I0819 17:39:22.078867   45795 machine.go:93] provisionDockerMachine start ...
	I0819 17:39:22.078885   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:39:22.079102   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.081607   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.082078   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.082110   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.082364   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.082546   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.082730   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.082876   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.083061   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.083233   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.083244   45795 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:39:22.194515   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-188752
	
	I0819 17:39:22.194548   45795 main.go:141] libmachine: (multinode-188752) Calling .GetMachineName
	I0819 17:39:22.194819   45795 buildroot.go:166] provisioning hostname "multinode-188752"
	I0819 17:39:22.194843   45795 main.go:141] libmachine: (multinode-188752) Calling .GetMachineName
	I0819 17:39:22.195052   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.197662   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.198070   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.198096   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.198229   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.198400   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.198538   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.198694   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.198812   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.199017   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.199033   45795 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-188752 && echo "multinode-188752" | sudo tee /etc/hostname
	I0819 17:39:22.326653   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-188752
	
	I0819 17:39:22.326683   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.329789   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.330145   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.330183   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.330356   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.330551   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.330735   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.330875   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.331081   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.331251   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.331267   45795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-188752' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-188752/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-188752' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:39:22.437774   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:39:22.437798   45795 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:39:22.437821   45795 buildroot.go:174] setting up certificates
	I0819 17:39:22.437836   45795 provision.go:84] configureAuth start
	I0819 17:39:22.437847   45795 main.go:141] libmachine: (multinode-188752) Calling .GetMachineName
	I0819 17:39:22.438103   45795 main.go:141] libmachine: (multinode-188752) Calling .GetIP
	I0819 17:39:22.440771   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.441113   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.441139   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.441279   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.443362   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.443668   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.443698   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.443758   45795 provision.go:143] copyHostCerts
	I0819 17:39:22.443784   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:39:22.443831   45795 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:39:22.443852   45795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:39:22.443931   45795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:39:22.444032   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:39:22.444054   45795 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:39:22.444060   45795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:39:22.444099   45795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:39:22.444166   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:39:22.444189   45795 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:39:22.444195   45795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:39:22.444224   45795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:39:22.444290   45795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.multinode-188752 san=[127.0.0.1 192.168.39.69 localhost minikube multinode-188752]
	I0819 17:39:22.547367   45795 provision.go:177] copyRemoteCerts
	I0819 17:39:22.547427   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:39:22.547447   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.550340   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.550702   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.550732   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.550882   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.551084   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.551232   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.551385   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:39:22.634395   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:39:22.634455   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:39:22.658822   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:39:22.658902   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 17:39:22.682120   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:39:22.682172   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
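The copyRemoteCerts step above amounts to pushing three files from the profile directory into /etc/docker on the node. A rough manual equivalent is sketched below for anyone replaying the step by hand; minikube itself streams the files over its SSH runner rather than calling scp, and the paths, user and key are the ones shown in the surrounding log:

    # Sketch only: not what minikube runs, just the same end state.
    MK=/home/jenkins/minikube-integration/19478-10654/.minikube      # profile dir (from the log)
    KEY=$MK/machines/multinode-188752/id_rsa                         # node SSH key (see sshutil line)
    IP=192.168.39.69                                                 # control-plane node IP
    scp -i "$KEY" "$MK/certs/ca.pem" "$MK/machines/server.pem" "$MK/machines/server-key.pem" docker@"$IP":/tmp/
    ssh -i "$KEY" docker@"$IP" 'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'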
	I0819 17:39:22.705123   45795 provision.go:87] duration metric: took 267.275084ms to configureAuth
	I0819 17:39:22.705151   45795 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:39:22.705360   45795 config.go:182] Loaded profile config "multinode-188752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:39:22.705425   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.708059   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.708469   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.708490   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.708675   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.708866   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.709007   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.709176   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.709344   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.709538   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.709554   45795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:40:53.432004   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:40:53.432037   45795 machine.go:96] duration metric: took 1m31.353159045s to provisionDockerMachine
	I0819 17:40:53.432049   45795 start.go:293] postStartSetup for "multinode-188752" (driver="kvm2")
	I0819 17:40:53.432070   45795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:40:53.432085   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.432413   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:40:53.432444   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.435583   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.436112   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.436140   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.436299   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.436520   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.436686   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.436842   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:40:53.522157   45795 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:40:53.526128   45795 command_runner.go:130] > NAME=Buildroot
	I0819 17:40:53.526156   45795 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 17:40:53.526164   45795 command_runner.go:130] > ID=buildroot
	I0819 17:40:53.526173   45795 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 17:40:53.526183   45795 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 17:40:53.526224   45795 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:40:53.526238   45795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:40:53.526314   45795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:40:53.526411   45795 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:40:53.526423   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:40:53.526515   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:40:53.535624   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:40:53.558181   45795 start.go:296] duration metric: took 126.118468ms for postStartSetup
	I0819 17:40:53.558234   45795 fix.go:56] duration metric: took 1m31.500376025s for fixHost
	I0819 17:40:53.558260   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.561123   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.561559   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.561589   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.561743   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.561928   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.562130   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.562255   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.562428   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:40:53.562630   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:40:53.562642   45795 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:40:53.669279   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089253.640714702
	
	I0819 17:40:53.669303   45795 fix.go:216] guest clock: 1724089253.640714702
	I0819 17:40:53.669311   45795 fix.go:229] Guest: 2024-08-19 17:40:53.640714702 +0000 UTC Remote: 2024-08-19 17:40:53.558239836 +0000 UTC m=+91.626880087 (delta=82.474866ms)
	I0819 17:40:53.669346   45795 fix.go:200] guest clock delta is within tolerance: 82.474866ms
	I0819 17:40:53.669352   45795 start.go:83] releasing machines lock for "multinode-188752", held for 1m31.611511852s
	I0819 17:40:53.669369   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.669629   45795 main.go:141] libmachine: (multinode-188752) Calling .GetIP
	I0819 17:40:53.672342   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.672675   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.672722   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.672897   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.673450   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.673631   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.673746   45795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:40:53.673792   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.673834   45795 ssh_runner.go:195] Run: cat /version.json
	I0819 17:40:53.673857   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.676393   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.676690   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.676727   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.676782   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.676923   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.677091   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.677226   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.677225   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.677283   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.677395   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.677392   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:40:53.677563   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.677702   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.677832   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:40:53.796698   45795 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 17:40:53.797400   45795 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 17:40:53.797547   45795 ssh_runner.go:195] Run: systemctl --version
	I0819 17:40:53.803318   45795 command_runner.go:130] > systemd 252 (252)
	I0819 17:40:53.803350   45795 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 17:40:53.803527   45795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:40:53.958940   45795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 17:40:53.965837   45795 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 17:40:53.966107   45795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:40:53.966176   45795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:40:53.974992   45795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 17:40:53.975015   45795 start.go:495] detecting cgroup driver to use...
	I0819 17:40:53.975077   45795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:40:53.994417   45795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:40:54.008704   45795 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:40:54.008793   45795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:40:54.022816   45795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:40:54.036949   45795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:40:54.187693   45795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:40:54.324523   45795 docker.go:233] disabling docker service ...
	I0819 17:40:54.324604   45795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:40:54.340685   45795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:40:54.353740   45795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:40:54.487839   45795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:40:54.622892   45795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
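Taken together, the systemctl one-liners above stop and mask both the cri-dockerd shim and the Docker daemon so that only CRI-O answers on the CRI socket. Condensed here purely for readability (same units, same order as the Run: lines above):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active docker || true    # should report inactive once masked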
	I0819 17:40:54.635983   45795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:40:54.653695   45795 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 17:40:54.653747   45795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:40:54.653797   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.663925   45795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:40:54.664010   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.673859   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.684317   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.695144   45795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:40:54.705022   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.714693   45795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.725243   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.735241   45795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:40:54.745413   45795 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 17:40:54.745484   45795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:40:54.755251   45795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:40:54.886710   45795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:40:56.414632   45795 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.527883252s)
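The CRI-O reconfiguration above is spread across several sh -c one-liners (the log prints them with their shell quoting stripped). Condensed purely for readability, the drop-in edits plus the restart are the following; only the CONF variable is introduced here for brevity:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"    # pin the pause image
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"                # use the cgroupfs driver
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio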
	I0819 17:40:56.414668   45795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:40:56.414718   45795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:40:56.419013   45795 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 17:40:56.419034   45795 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 17:40:56.419040   45795 command_runner.go:130] > Device: 0,22	Inode: 1348        Links: 1
	I0819 17:40:56.419047   45795 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 17:40:56.419052   45795 command_runner.go:130] > Access: 2024-08-19 17:40:56.285457337 +0000
	I0819 17:40:56.419058   45795 command_runner.go:130] > Modify: 2024-08-19 17:40:56.285457337 +0000
	I0819 17:40:56.419062   45795 command_runner.go:130] > Change: 2024-08-19 17:40:56.285457337 +0000
	I0819 17:40:56.419066   45795 command_runner.go:130] >  Birth: -
	I0819 17:40:56.419107   45795 start.go:563] Will wait 60s for crictl version
	I0819 17:40:56.419162   45795 ssh_runner.go:195] Run: which crictl
	I0819 17:40:56.422534   45795 command_runner.go:130] > /usr/bin/crictl
	I0819 17:40:56.422647   45795 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:40:56.455570   45795 command_runner.go:130] > Version:  0.1.0
	I0819 17:40:56.455595   45795 command_runner.go:130] > RuntimeName:  cri-o
	I0819 17:40:56.455600   45795 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 17:40:56.455605   45795 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 17:40:56.456641   45795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
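If the 60s socket wait above ever expires, the same runtime checks minikube runs here can be repeated by hand on the node; the expected output is what the surrounding log echoes back:

    stat /var/run/crio/crio.sock    # root-owned srw-rw---- socket
    sudo /usr/bin/crictl version    # RuntimeName cri-o, RuntimeVersion 1.29.1, RuntimeApiVersion v1
    crio --version                  # full build info, as dumped below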
	I0819 17:40:56.456721   45795 ssh_runner.go:195] Run: crio --version
	I0819 17:40:56.484584   45795 command_runner.go:130] > crio version 1.29.1
	I0819 17:40:56.484605   45795 command_runner.go:130] > Version:        1.29.1
	I0819 17:40:56.484612   45795 command_runner.go:130] > GitCommit:      unknown
	I0819 17:40:56.484619   45795 command_runner.go:130] > GitCommitDate:  unknown
	I0819 17:40:56.484625   45795 command_runner.go:130] > GitTreeState:   clean
	I0819 17:40:56.484632   45795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 17:40:56.484639   45795 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 17:40:56.484645   45795 command_runner.go:130] > Compiler:       gc
	I0819 17:40:56.484652   45795 command_runner.go:130] > Platform:       linux/amd64
	I0819 17:40:56.484658   45795 command_runner.go:130] > Linkmode:       dynamic
	I0819 17:40:56.484669   45795 command_runner.go:130] > BuildTags:      
	I0819 17:40:56.484676   45795 command_runner.go:130] >   containers_image_ostree_stub
	I0819 17:40:56.484680   45795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 17:40:56.484684   45795 command_runner.go:130] >   btrfs_noversion
	I0819 17:40:56.484689   45795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 17:40:56.484697   45795 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 17:40:56.484701   45795 command_runner.go:130] >   seccomp
	I0819 17:40:56.484705   45795 command_runner.go:130] > LDFlags:          unknown
	I0819 17:40:56.484710   45795 command_runner.go:130] > SeccompEnabled:   true
	I0819 17:40:56.484714   45795 command_runner.go:130] > AppArmorEnabled:  false
	I0819 17:40:56.484796   45795 ssh_runner.go:195] Run: crio --version
	I0819 17:40:56.510072   45795 command_runner.go:130] > crio version 1.29.1
	I0819 17:40:56.510092   45795 command_runner.go:130] > Version:        1.29.1
	I0819 17:40:56.510098   45795 command_runner.go:130] > GitCommit:      unknown
	I0819 17:40:56.510102   45795 command_runner.go:130] > GitCommitDate:  unknown
	I0819 17:40:56.510106   45795 command_runner.go:130] > GitTreeState:   clean
	I0819 17:40:56.510112   45795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 17:40:56.510115   45795 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 17:40:56.510119   45795 command_runner.go:130] > Compiler:       gc
	I0819 17:40:56.510124   45795 command_runner.go:130] > Platform:       linux/amd64
	I0819 17:40:56.510128   45795 command_runner.go:130] > Linkmode:       dynamic
	I0819 17:40:56.510145   45795 command_runner.go:130] > BuildTags:      
	I0819 17:40:56.510151   45795 command_runner.go:130] >   containers_image_ostree_stub
	I0819 17:40:56.510156   45795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 17:40:56.510162   45795 command_runner.go:130] >   btrfs_noversion
	I0819 17:40:56.510167   45795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 17:40:56.510171   45795 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 17:40:56.510175   45795 command_runner.go:130] >   seccomp
	I0819 17:40:56.510179   45795 command_runner.go:130] > LDFlags:          unknown
	I0819 17:40:56.510182   45795 command_runner.go:130] > SeccompEnabled:   true
	I0819 17:40:56.510189   45795 command_runner.go:130] > AppArmorEnabled:  false
	I0819 17:40:56.513324   45795 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:40:56.514662   45795 main.go:141] libmachine: (multinode-188752) Calling .GetIP
	I0819 17:40:56.517552   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:56.517909   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:56.517933   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:56.518162   45795 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:40:56.521925   45795 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 17:40:56.522022   45795 kubeadm.go:883] updating cluster {Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:40:56.522170   45795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:40:56.522216   45795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:40:56.565579   45795 command_runner.go:130] > {
	I0819 17:40:56.565606   45795 command_runner.go:130] >   "images": [
	I0819 17:40:56.565618   45795 command_runner.go:130] >     {
	I0819 17:40:56.565627   45795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 17:40:56.565632   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.565644   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 17:40:56.565652   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565660   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.565676   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 17:40:56.565691   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 17:40:56.565696   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565702   45795 command_runner.go:130] >       "size": "87165492",
	I0819 17:40:56.565705   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.565710   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.565715   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.565720   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.565724   45795 command_runner.go:130] >     },
	I0819 17:40:56.565729   45795 command_runner.go:130] >     {
	I0819 17:40:56.565739   45795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 17:40:56.565752   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.565762   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 17:40:56.565771   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565779   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.565795   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 17:40:56.565806   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 17:40:56.565813   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565820   45795 command_runner.go:130] >       "size": "87190579",
	I0819 17:40:56.565831   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.565848   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.565858   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.565871   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.565880   45795 command_runner.go:130] >     },
	I0819 17:40:56.565890   45795 command_runner.go:130] >     {
	I0819 17:40:56.565901   45795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 17:40:56.565912   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.565925   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 17:40:56.565935   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565942   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.565968   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 17:40:56.565982   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 17:40:56.565988   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565998   45795 command_runner.go:130] >       "size": "1363676",
	I0819 17:40:56.566010   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.566021   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566036   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566044   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566053   45795 command_runner.go:130] >     },
	I0819 17:40:56.566063   45795 command_runner.go:130] >     {
	I0819 17:40:56.566074   45795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 17:40:56.566085   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566098   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 17:40:56.566109   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566120   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566136   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 17:40:56.566159   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 17:40:56.566170   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566182   45795 command_runner.go:130] >       "size": "31470524",
	I0819 17:40:56.566192   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.566203   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566211   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566225   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566235   45795 command_runner.go:130] >     },
	I0819 17:40:56.566244   45795 command_runner.go:130] >     {
	I0819 17:40:56.566255   45795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 17:40:56.566266   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566279   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 17:40:56.566289   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566297   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566318   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 17:40:56.566330   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 17:40:56.566339   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566351   45795 command_runner.go:130] >       "size": "61245718",
	I0819 17:40:56.566366   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.566378   45795 command_runner.go:130] >       "username": "nonroot",
	I0819 17:40:56.566392   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566410   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566419   45795 command_runner.go:130] >     },
	I0819 17:40:56.566425   45795 command_runner.go:130] >     {
	I0819 17:40:56.566439   45795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 17:40:56.566450   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566458   45795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 17:40:56.566467   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566475   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566490   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 17:40:56.566502   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 17:40:56.566507   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566519   45795 command_runner.go:130] >       "size": "149009664",
	I0819 17:40:56.566529   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.566540   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.566550   45795 command_runner.go:130] >       },
	I0819 17:40:56.566560   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566573   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566585   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566591   45795 command_runner.go:130] >     },
	I0819 17:40:56.566601   45795 command_runner.go:130] >     {
	I0819 17:40:56.566612   45795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 17:40:56.566622   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566634   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 17:40:56.566644   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566655   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566668   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 17:40:56.566683   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 17:40:56.566693   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566700   45795 command_runner.go:130] >       "size": "95233506",
	I0819 17:40:56.566707   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.566714   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.566724   45795 command_runner.go:130] >       },
	I0819 17:40:56.566730   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566741   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566751   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566768   45795 command_runner.go:130] >     },
	I0819 17:40:56.566778   45795 command_runner.go:130] >     {
	I0819 17:40:56.566792   45795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 17:40:56.566802   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566812   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 17:40:56.566821   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566832   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566865   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 17:40:56.566879   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 17:40:56.566882   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566886   45795 command_runner.go:130] >       "size": "89437512",
	I0819 17:40:56.566891   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.566895   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.566901   45795 command_runner.go:130] >       },
	I0819 17:40:56.566906   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566911   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566917   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566923   45795 command_runner.go:130] >     },
	I0819 17:40:56.566928   45795 command_runner.go:130] >     {
	I0819 17:40:56.566938   45795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 17:40:56.566945   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566952   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 17:40:56.566959   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566965   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566977   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 17:40:56.566989   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 17:40:56.566995   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567002   45795 command_runner.go:130] >       "size": "92728217",
	I0819 17:40:56.567009   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.567020   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.567031   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.567042   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.567052   45795 command_runner.go:130] >     },
	I0819 17:40:56.567062   45795 command_runner.go:130] >     {
	I0819 17:40:56.567075   45795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 17:40:56.567083   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.567096   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 17:40:56.567103   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567107   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.567117   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 17:40:56.567127   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 17:40:56.567133   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567138   45795 command_runner.go:130] >       "size": "68420936",
	I0819 17:40:56.567144   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.567148   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.567154   45795 command_runner.go:130] >       },
	I0819 17:40:56.567158   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.567165   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.567169   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.567175   45795 command_runner.go:130] >     },
	I0819 17:40:56.567179   45795 command_runner.go:130] >     {
	I0819 17:40:56.567187   45795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 17:40:56.567194   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.567199   45795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 17:40:56.567205   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567209   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.567216   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 17:40:56.567225   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 17:40:56.567231   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567236   45795 command_runner.go:130] >       "size": "742080",
	I0819 17:40:56.567242   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.567246   45795 command_runner.go:130] >         "value": "65535"
	I0819 17:40:56.567253   45795 command_runner.go:130] >       },
	I0819 17:40:56.567257   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.567263   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.567267   45795 command_runner.go:130] >       "pinned": true
	I0819 17:40:56.567273   45795 command_runner.go:130] >     }
	I0819 17:40:56.567277   45795 command_runner.go:130] >   ]
	I0819 17:40:56.567283   45795 command_runner.go:130] > }
	I0819 17:40:56.567522   45795 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:40:56.567536   45795 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:40:56.567614   45795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:40:56.599944   45795 command_runner.go:130] > {
	I0819 17:40:56.599965   45795 command_runner.go:130] >   "images": [
	I0819 17:40:56.599969   45795 command_runner.go:130] >     {
	I0819 17:40:56.599980   45795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 17:40:56.599986   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.599991   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 17:40:56.599995   45795 command_runner.go:130] >       ],
	I0819 17:40:56.599999   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600007   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 17:40:56.600014   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 17:40:56.600018   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600023   45795 command_runner.go:130] >       "size": "87165492",
	I0819 17:40:56.600027   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600034   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600040   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600045   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600048   45795 command_runner.go:130] >     },
	I0819 17:40:56.600052   45795 command_runner.go:130] >     {
	I0819 17:40:56.600058   45795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 17:40:56.600062   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600068   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 17:40:56.600072   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600077   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600084   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 17:40:56.600093   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 17:40:56.600097   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600100   45795 command_runner.go:130] >       "size": "87190579",
	I0819 17:40:56.600104   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600111   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600123   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600129   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600133   45795 command_runner.go:130] >     },
	I0819 17:40:56.600138   45795 command_runner.go:130] >     {
	I0819 17:40:56.600144   45795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 17:40:56.600148   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600153   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 17:40:56.600157   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600161   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600170   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 17:40:56.600178   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 17:40:56.600183   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600188   45795 command_runner.go:130] >       "size": "1363676",
	I0819 17:40:56.600194   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600198   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600206   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600213   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600217   45795 command_runner.go:130] >     },
	I0819 17:40:56.600220   45795 command_runner.go:130] >     {
	I0819 17:40:56.600225   45795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 17:40:56.600232   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600238   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 17:40:56.600243   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600249   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600259   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 17:40:56.600273   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 17:40:56.600280   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600284   45795 command_runner.go:130] >       "size": "31470524",
	I0819 17:40:56.600290   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600294   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600300   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600304   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600309   45795 command_runner.go:130] >     },
	I0819 17:40:56.600315   45795 command_runner.go:130] >     {
	I0819 17:40:56.600326   45795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 17:40:56.600334   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600344   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 17:40:56.600350   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600354   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600363   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 17:40:56.600373   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 17:40:56.600376   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600380   45795 command_runner.go:130] >       "size": "61245718",
	I0819 17:40:56.600384   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600388   45795 command_runner.go:130] >       "username": "nonroot",
	I0819 17:40:56.600394   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600398   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600403   45795 command_runner.go:130] >     },
	I0819 17:40:56.600407   45795 command_runner.go:130] >     {
	I0819 17:40:56.600415   45795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 17:40:56.600421   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600426   45795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 17:40:56.600432   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600436   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600451   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 17:40:56.600460   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 17:40:56.600466   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600470   45795 command_runner.go:130] >       "size": "149009664",
	I0819 17:40:56.600475   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600479   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600485   45795 command_runner.go:130] >       },
	I0819 17:40:56.600491   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600495   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600501   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600505   45795 command_runner.go:130] >     },
	I0819 17:40:56.600510   45795 command_runner.go:130] >     {
	I0819 17:40:56.600516   45795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 17:40:56.600522   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600527   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 17:40:56.600533   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600537   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600546   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 17:40:56.600569   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 17:40:56.600576   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600581   45795 command_runner.go:130] >       "size": "95233506",
	I0819 17:40:56.600586   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600589   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600592   45795 command_runner.go:130] >       },
	I0819 17:40:56.600596   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600599   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600603   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600609   45795 command_runner.go:130] >     },
	I0819 17:40:56.600613   45795 command_runner.go:130] >     {
	I0819 17:40:56.600621   45795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 17:40:56.600625   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600630   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 17:40:56.600634   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600638   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600661   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 17:40:56.600671   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 17:40:56.600677   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600681   45795 command_runner.go:130] >       "size": "89437512",
	I0819 17:40:56.600687   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600691   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600696   45795 command_runner.go:130] >       },
	I0819 17:40:56.600700   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600704   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600710   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600713   45795 command_runner.go:130] >     },
	I0819 17:40:56.600719   45795 command_runner.go:130] >     {
	I0819 17:40:56.600724   45795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 17:40:56.600730   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600735   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 17:40:56.600740   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600744   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600767   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 17:40:56.600779   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 17:40:56.600784   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600793   45795 command_runner.go:130] >       "size": "92728217",
	I0819 17:40:56.600799   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600804   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600810   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600814   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600819   45795 command_runner.go:130] >     },
	I0819 17:40:56.600823   45795 command_runner.go:130] >     {
	I0819 17:40:56.600831   45795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 17:40:56.600838   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600843   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 17:40:56.600848   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600852   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600861   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 17:40:56.600870   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 17:40:56.600875   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600879   45795 command_runner.go:130] >       "size": "68420936",
	I0819 17:40:56.600885   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600889   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600895   45795 command_runner.go:130] >       },
	I0819 17:40:56.600898   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600905   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600909   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600913   45795 command_runner.go:130] >     },
	I0819 17:40:56.600916   45795 command_runner.go:130] >     {
	I0819 17:40:56.600924   45795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 17:40:56.600928   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600935   45795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 17:40:56.600938   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600942   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600950   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 17:40:56.600957   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 17:40:56.600963   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600966   45795 command_runner.go:130] >       "size": "742080",
	I0819 17:40:56.600970   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600974   45795 command_runner.go:130] >         "value": "65535"
	I0819 17:40:56.600977   45795 command_runner.go:130] >       },
	I0819 17:40:56.600990   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600997   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.601003   45795 command_runner.go:130] >       "pinned": true
	I0819 17:40:56.601009   45795 command_runner.go:130] >     }
	I0819 17:40:56.601017   45795 command_runner.go:130] >   ]
	I0819 17:40:56.601023   45795 command_runner.go:130] > }
	I0819 17:40:56.601151   45795 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:40:56.601163   45795 cache_images.go:84] Images are preloaded, skipping loading
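	(The two `crictl images --output json` listings above are what the preload check inspects. The following is a minimal, illustrative Go sketch of how that check could be reproduced by hand on a node where crictl is available; the struct fields simply mirror the JSON shown in the log, and this is an assumption for illustration only, not minikube's actual crio.go/cache_images.go implementation.)

	    // preloadcheck.go - illustrative sketch: decode `crictl images --output json`
	    // and report whether the expected Kubernetes images are already present.
	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os/exec"
	    )

	    // imageList mirrors the JSON structure printed in the log above.
	    type imageList struct {
	        Images []struct {
	            ID          string   `json:"id"`
	            RepoTags    []string `json:"repoTags"`
	            RepoDigests []string `json:"repoDigests"`
	            Size        string   `json:"size"`
	            Pinned      bool     `json:"pinned"`
	        } `json:"images"`
	    }

	    func main() {
	        // In the log this command is run over SSH on the minikube node; here we
	        // assume crictl is runnable locally (illustrative assumption).
	        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	        if err != nil {
	            fmt.Println("crictl failed:", err)
	            return
	        }

	        var list imageList
	        if err := json.Unmarshal(out, &list); err != nil {
	            fmt.Println("decode failed:", err)
	            return
	        }

	        have := map[string]bool{}
	        for _, img := range list.Images {
	            for _, tag := range img.RepoTags {
	                have[tag] = true
	            }
	        }

	        // Expected tags taken from the listing above (v1.31.0 control plane,
	        // etcd 3.5.15-0, CoreDNS v1.11.1, pause 3.10).
	        expected := []string{
	            "registry.k8s.io/kube-apiserver:v1.31.0",
	            "registry.k8s.io/kube-controller-manager:v1.31.0",
	            "registry.k8s.io/kube-scheduler:v1.31.0",
	            "registry.k8s.io/kube-proxy:v1.31.0",
	            "registry.k8s.io/etcd:3.5.15-0",
	            "registry.k8s.io/coredns/coredns:v1.11.1",
	            "registry.k8s.io/pause:3.10",
	        }
	        missing := 0
	        for _, tag := range expected {
	            if !have[tag] {
	                fmt.Println("missing:", tag)
	                missing++
	            }
	        }
	        if missing == 0 {
	            fmt.Println("all images are preloaded for cri-o runtime")
	        }
	    }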
	I0819 17:40:56.601170   45795 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.0 crio true true} ...
	I0819 17:40:56.601259   45795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-188752 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
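	(The kubelet unit dump above shows the flags minikube generates for node multinode-188752. The short Go sketch below only illustrates how that ExecStart line is assembled from the node parameters logged above, i.e. {192.168.39.69 8443 v1.31.0 crio}; it is an assumption for illustration, not minikube's actual kubeadm.go templating code.)

	    // kubeletflags.go - illustrative sketch of the ExecStart line seen above.
	    package main

	    import "fmt"

	    // kubeletExecStart builds the flag string from the node's Kubernetes
	    // version, hostname override, and node IP (values as logged above).
	    func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
	        return fmt.Sprintf(
	            "/var/lib/minikube/binaries/%s/kubelet "+
	                "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
	                "--config=/var/lib/kubelet/config.yaml "+
	                "--hostname-override=%s "+
	                "--kubeconfig=/etc/kubernetes/kubelet.conf "+
	                "--node-ip=%s",
	            k8sVersion, nodeName, nodeIP)
	    }

	    func main() {
	        fmt.Println(kubeletExecStart("v1.31.0", "multinode-188752", "192.168.39.69"))
	    }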
	I0819 17:40:56.601323   45795 ssh_runner.go:195] Run: crio config
	I0819 17:40:56.644531   45795 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 17:40:56.644573   45795 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 17:40:56.644592   45795 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 17:40:56.644597   45795 command_runner.go:130] > #
	I0819 17:40:56.644608   45795 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 17:40:56.644618   45795 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 17:40:56.644631   45795 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 17:40:56.644648   45795 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 17:40:56.644658   45795 command_runner.go:130] > # reload'.
	I0819 17:40:56.644668   45795 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 17:40:56.644678   45795 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 17:40:56.644691   45795 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 17:40:56.644703   45795 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 17:40:56.644713   45795 command_runner.go:130] > [crio]
	I0819 17:40:56.644723   45795 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 17:40:56.644733   45795 command_runner.go:130] > # containers images, in this directory.
	I0819 17:40:56.644745   45795 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 17:40:56.644778   45795 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 17:40:56.644789   45795 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 17:40:56.644801   45795 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 17:40:56.644957   45795 command_runner.go:130] > # imagestore = ""
	I0819 17:40:56.644975   45795 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 17:40:56.644982   45795 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 17:40:56.645063   45795 command_runner.go:130] > storage_driver = "overlay"
	I0819 17:40:56.645093   45795 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 17:40:56.645106   45795 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 17:40:56.645115   45795 command_runner.go:130] > storage_option = [
	I0819 17:40:56.645438   45795 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 17:40:56.645446   45795 command_runner.go:130] > ]
	I0819 17:40:56.645452   45795 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 17:40:56.645467   45795 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 17:40:56.645477   45795 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 17:40:56.645486   45795 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 17:40:56.645498   45795 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 17:40:56.645505   45795 command_runner.go:130] > # always happen on a node reboot
	I0819 17:40:56.645510   45795 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 17:40:56.645537   45795 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 17:40:56.645547   45795 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 17:40:56.645556   45795 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 17:40:56.645567   45795 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 17:40:56.645579   45795 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 17:40:56.645594   45795 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 17:40:56.645603   45795 command_runner.go:130] > # internal_wipe = true
	I0819 17:40:56.645618   45795 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 17:40:56.645636   45795 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 17:40:56.645643   45795 command_runner.go:130] > # internal_repair = false
	I0819 17:40:56.645648   45795 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 17:40:56.645656   45795 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 17:40:56.645661   45795 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 17:40:56.645667   45795 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 17:40:56.645672   45795 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 17:40:56.645679   45795 command_runner.go:130] > [crio.api]
	I0819 17:40:56.645685   45795 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 17:40:56.645695   45795 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 17:40:56.645703   45795 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 17:40:56.645713   45795 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 17:40:56.645723   45795 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 17:40:56.645734   45795 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 17:40:56.645743   45795 command_runner.go:130] > # stream_port = "0"
	I0819 17:40:56.645752   45795 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 17:40:56.645759   45795 command_runner.go:130] > # stream_enable_tls = false
	I0819 17:40:56.645765   45795 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 17:40:56.645771   45795 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 17:40:56.645777   45795 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 17:40:56.645785   45795 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 17:40:56.645789   45795 command_runner.go:130] > # minutes.
	I0819 17:40:56.645795   45795 command_runner.go:130] > # stream_tls_cert = ""
	I0819 17:40:56.645808   45795 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 17:40:56.645822   45795 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 17:40:56.645828   45795 command_runner.go:130] > # stream_tls_key = ""
	I0819 17:40:56.645837   45795 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 17:40:56.645850   45795 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 17:40:56.645874   45795 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 17:40:56.645883   45795 command_runner.go:130] > # stream_tls_ca = ""
	I0819 17:40:56.645898   45795 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 17:40:56.645908   45795 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 17:40:56.645921   45795 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 17:40:56.645932   45795 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 17:40:56.645942   45795 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 17:40:56.645953   45795 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 17:40:56.645968   45795 command_runner.go:130] > [crio.runtime]
	I0819 17:40:56.645977   45795 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 17:40:56.645983   45795 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 17:40:56.645990   45795 command_runner.go:130] > # "nofile=1024:2048"
	I0819 17:40:56.645999   45795 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 17:40:56.646008   45795 command_runner.go:130] > # default_ulimits = [
	I0819 17:40:56.646014   45795 command_runner.go:130] > # ]
	I0819 17:40:56.646027   45795 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 17:40:56.646035   45795 command_runner.go:130] > # no_pivot = false
	I0819 17:40:56.646044   45795 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 17:40:56.646056   45795 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 17:40:56.646069   45795 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 17:40:56.646077   45795 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 17:40:56.646088   45795 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 17:40:56.646106   45795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 17:40:56.646116   45795 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 17:40:56.646126   45795 command_runner.go:130] > # Cgroup setting for conmon
	I0819 17:40:56.646137   45795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 17:40:56.646147   45795 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 17:40:56.646157   45795 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 17:40:56.646164   45795 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 17:40:56.646172   45795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 17:40:56.646181   45795 command_runner.go:130] > conmon_env = [
	I0819 17:40:56.646190   45795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 17:40:56.646201   45795 command_runner.go:130] > ]
	I0819 17:40:56.646209   45795 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 17:40:56.646221   45795 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 17:40:56.646232   45795 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 17:40:56.646240   45795 command_runner.go:130] > # default_env = [
	I0819 17:40:56.646246   45795 command_runner.go:130] > # ]
	I0819 17:40:56.646257   45795 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 17:40:56.646271   45795 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 17:40:56.646280   45795 command_runner.go:130] > # selinux = false
	I0819 17:40:56.646289   45795 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 17:40:56.646302   45795 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 17:40:56.646319   45795 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 17:40:56.646337   45795 command_runner.go:130] > # seccomp_profile = ""
	I0819 17:40:56.646349   45795 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 17:40:56.646361   45795 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 17:40:56.646373   45795 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 17:40:56.646384   45795 command_runner.go:130] > # which might increase security.
	I0819 17:40:56.646392   45795 command_runner.go:130] > # This option is currently deprecated,
	I0819 17:40:56.646403   45795 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 17:40:56.646413   45795 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 17:40:56.646423   45795 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 17:40:56.646439   45795 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 17:40:56.646455   45795 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 17:40:56.646510   45795 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 17:40:56.646537   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.646546   45795 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 17:40:56.646559   45795 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 17:40:56.646571   45795 command_runner.go:130] > # the cgroup blockio controller.
	I0819 17:40:56.646580   45795 command_runner.go:130] > # blockio_config_file = ""
	I0819 17:40:56.646591   45795 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 17:40:56.646602   45795 command_runner.go:130] > # blockio parameters.
	I0819 17:40:56.646615   45795 command_runner.go:130] > # blockio_reload = false
	I0819 17:40:56.646626   45795 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 17:40:56.646634   45795 command_runner.go:130] > # irqbalance daemon.
	I0819 17:40:56.646646   45795 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 17:40:56.646660   45795 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0819 17:40:56.646678   45795 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 17:40:56.646690   45795 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 17:40:56.646711   45795 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 17:40:56.646724   45795 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 17:40:56.646736   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.646748   45795 command_runner.go:130] > # rdt_config_file = ""
	I0819 17:40:56.646761   45795 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 17:40:56.646773   45795 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 17:40:56.646819   45795 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 17:40:56.646831   45795 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 17:40:56.646846   45795 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 17:40:56.646857   45795 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 17:40:56.646875   45795 command_runner.go:130] > # will be added.
	I0819 17:40:56.646887   45795 command_runner.go:130] > # default_capabilities = [
	I0819 17:40:56.646897   45795 command_runner.go:130] > # 	"CHOWN",
	I0819 17:40:56.646905   45795 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 17:40:56.646915   45795 command_runner.go:130] > # 	"FSETID",
	I0819 17:40:56.646922   45795 command_runner.go:130] > # 	"FOWNER",
	I0819 17:40:56.646932   45795 command_runner.go:130] > # 	"SETGID",
	I0819 17:40:56.646944   45795 command_runner.go:130] > # 	"SETUID",
	I0819 17:40:56.646955   45795 command_runner.go:130] > # 	"SETPCAP",
	I0819 17:40:56.646963   45795 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 17:40:56.646973   45795 command_runner.go:130] > # 	"KILL",
	I0819 17:40:56.646980   45795 command_runner.go:130] > # ]
	I0819 17:40:56.646993   45795 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 17:40:56.647006   45795 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 17:40:56.647017   45795 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 17:40:56.647032   45795 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 17:40:56.647046   45795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 17:40:56.647057   45795 command_runner.go:130] > default_sysctls = [
	I0819 17:40:56.647067   45795 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 17:40:56.647073   45795 command_runner.go:130] > ]
	I0819 17:40:56.647084   45795 command_runner.go:130] > # List of devices on the host that a
	I0819 17:40:56.647098   45795 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 17:40:56.647109   45795 command_runner.go:130] > # allowed_devices = [
	I0819 17:40:56.647115   45795 command_runner.go:130] > # 	"/dev/fuse",
	I0819 17:40:56.647122   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647130   45795 command_runner.go:130] > # List of additional devices. specified as
	I0819 17:40:56.647146   45795 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 17:40:56.647159   45795 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 17:40:56.647173   45795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 17:40:56.647185   45795 command_runner.go:130] > # additional_devices = [
	I0819 17:40:56.647194   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647204   45795 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 17:40:56.647219   45795 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 17:40:56.647230   45795 command_runner.go:130] > # 	"/etc/cdi",
	I0819 17:40:56.647238   45795 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 17:40:56.647248   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647271   45795 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 17:40:56.647285   45795 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 17:40:56.647295   45795 command_runner.go:130] > # Defaults to false.
	I0819 17:40:56.647328   45795 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 17:40:56.647342   45795 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 17:40:56.647353   45795 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 17:40:56.647363   45795 command_runner.go:130] > # hooks_dir = [
	I0819 17:40:56.647372   45795 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 17:40:56.647378   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647392   45795 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 17:40:56.647406   45795 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 17:40:56.647419   45795 command_runner.go:130] > # its default mounts from the following two files:
	I0819 17:40:56.647428   45795 command_runner.go:130] > #
	I0819 17:40:56.647439   45795 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 17:40:56.647452   45795 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 17:40:56.647466   45795 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 17:40:56.647475   45795 command_runner.go:130] > #
	I0819 17:40:56.647493   45795 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 17:40:56.647507   45795 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 17:40:56.647519   45795 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 17:40:56.647531   45795 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 17:40:56.647540   45795 command_runner.go:130] > #
	I0819 17:40:56.647549   45795 command_runner.go:130] > # default_mounts_file = ""
	I0819 17:40:56.647562   45795 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 17:40:56.647572   45795 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 17:40:56.647583   45795 command_runner.go:130] > pids_limit = 1024
	I0819 17:40:56.647593   45795 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0819 17:40:56.647607   45795 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 17:40:56.647621   45795 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 17:40:56.647637   45795 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 17:40:56.647648   45795 command_runner.go:130] > # log_size_max = -1
	I0819 17:40:56.647662   45795 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 17:40:56.647670   45795 command_runner.go:130] > # log_to_journald = false
	I0819 17:40:56.647691   45795 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 17:40:56.647703   45795 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 17:40:56.647719   45795 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 17:40:56.647739   45795 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 17:40:56.647758   45795 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 17:40:56.647767   45795 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 17:40:56.647778   45795 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 17:40:56.647789   45795 command_runner.go:130] > # read_only = false
	I0819 17:40:56.647799   45795 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 17:40:56.647813   45795 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 17:40:56.647832   45795 command_runner.go:130] > # live configuration reload.
	I0819 17:40:56.647843   45795 command_runner.go:130] > # log_level = "info"
	I0819 17:40:56.647866   45795 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 17:40:56.647875   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.647885   45795 command_runner.go:130] > # log_filter = ""
	I0819 17:40:56.647896   45795 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 17:40:56.647909   45795 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 17:40:56.647919   45795 command_runner.go:130] > # separated by comma.
	I0819 17:40:56.647931   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.647942   45795 command_runner.go:130] > # uid_mappings = ""
	I0819 17:40:56.647951   45795 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 17:40:56.647962   45795 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 17:40:56.647976   45795 command_runner.go:130] > # separated by comma.
	I0819 17:40:56.647992   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.648002   45795 command_runner.go:130] > # gid_mappings = ""
	I0819 17:40:56.648013   45795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 17:40:56.648026   45795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 17:40:56.648038   45795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 17:40:56.648055   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.648066   45795 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 17:40:56.648076   45795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 17:40:56.648089   45795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 17:40:56.648103   45795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 17:40:56.648119   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.648130   45795 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 17:40:56.648141   45795 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 17:40:56.648154   45795 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 17:40:56.648164   45795 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 17:40:56.648173   45795 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 17:40:56.648187   45795 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 17:40:56.648196   45795 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 17:40:56.648203   45795 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 17:40:56.648209   45795 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 17:40:56.648220   45795 command_runner.go:130] > drop_infra_ctr = false
	I0819 17:40:56.648233   45795 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 17:40:56.648246   45795 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 17:40:56.648261   45795 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 17:40:56.648273   45795 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 17:40:56.648285   45795 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 17:40:56.648297   45795 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 17:40:56.648311   45795 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 17:40:56.648316   45795 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 17:40:56.648323   45795 command_runner.go:130] > # shared_cpuset = ""
	I0819 17:40:56.648328   45795 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 17:40:56.648333   45795 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 17:40:56.648340   45795 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 17:40:56.648347   45795 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 17:40:56.648354   45795 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 17:40:56.648360   45795 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 17:40:56.648368   45795 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 17:40:56.648372   45795 command_runner.go:130] > # enable_criu_support = false
	I0819 17:40:56.648377   45795 command_runner.go:130] > # Enable/disable the generation of the container,
	I0819 17:40:56.648385   45795 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 17:40:56.648392   45795 command_runner.go:130] > # enable_pod_events = false
	I0819 17:40:56.648398   45795 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 17:40:56.648406   45795 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 17:40:56.648412   45795 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 17:40:56.648419   45795 command_runner.go:130] > # default_runtime = "runc"
	I0819 17:40:56.648424   45795 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 17:40:56.648431   45795 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0819 17:40:56.648442   45795 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 17:40:56.648450   45795 command_runner.go:130] > # creation as a file is not desired either.
	I0819 17:40:56.648457   45795 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 17:40:56.648468   45795 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 17:40:56.648475   45795 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 17:40:56.648486   45795 command_runner.go:130] > # ]
	I0819 17:40:56.648495   45795 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 17:40:56.648504   45795 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 17:40:56.648512   45795 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 17:40:56.648518   45795 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 17:40:56.648523   45795 command_runner.go:130] > #
	I0819 17:40:56.648528   45795 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 17:40:56.648535   45795 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 17:40:56.648593   45795 command_runner.go:130] > # runtime_type = "oci"
	I0819 17:40:56.648601   45795 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 17:40:56.648606   45795 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 17:40:56.648611   45795 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 17:40:56.648618   45795 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 17:40:56.648622   45795 command_runner.go:130] > # monitor_env = []
	I0819 17:40:56.648629   45795 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 17:40:56.648634   45795 command_runner.go:130] > # allowed_annotations = []
	I0819 17:40:56.648641   45795 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 17:40:56.648648   45795 command_runner.go:130] > # Where:
	I0819 17:40:56.648655   45795 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 17:40:56.648663   45795 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 17:40:56.648672   45795 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 17:40:56.648680   45795 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 17:40:56.648686   45795 command_runner.go:130] > #   in $PATH.
	I0819 17:40:56.648692   45795 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 17:40:56.648697   45795 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 17:40:56.648703   45795 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 17:40:56.648709   45795 command_runner.go:130] > #   state.
	I0819 17:40:56.648715   45795 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 17:40:56.648723   45795 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 17:40:56.648732   45795 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 17:40:56.648740   45795 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 17:40:56.648784   45795 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 17:40:56.648796   45795 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 17:40:56.648801   45795 command_runner.go:130] > #   The currently recognized values are:
	I0819 17:40:56.648809   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 17:40:56.648818   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 17:40:56.648834   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 17:40:56.648844   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 17:40:56.648854   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 17:40:56.648863   45795 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 17:40:56.648872   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 17:40:56.648880   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 17:40:56.648885   45795 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 17:40:56.648894   45795 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 17:40:56.648901   45795 command_runner.go:130] > #   deprecated option "conmon".
	I0819 17:40:56.648908   45795 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 17:40:56.648915   45795 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 17:40:56.648922   45795 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 17:40:56.648929   45795 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 17:40:56.648935   45795 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 17:40:56.648943   45795 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 17:40:56.648949   45795 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 17:40:56.648956   45795 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 17:40:56.648959   45795 command_runner.go:130] > #
	I0819 17:40:56.648964   45795 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 17:40:56.648970   45795 command_runner.go:130] > #
	I0819 17:40:56.648976   45795 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 17:40:56.648985   45795 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 17:40:56.648991   45795 command_runner.go:130] > #
	I0819 17:40:56.648997   45795 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 17:40:56.649006   45795 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 17:40:56.649009   45795 command_runner.go:130] > #
	I0819 17:40:56.649015   45795 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 17:40:56.649022   45795 command_runner.go:130] > # feature.
	I0819 17:40:56.649028   45795 command_runner.go:130] > #
	I0819 17:40:56.649037   45795 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 17:40:56.649045   45795 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 17:40:56.649054   45795 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 17:40:56.649062   45795 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 17:40:56.649070   45795 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 17:40:56.649074   45795 command_runner.go:130] > #
	I0819 17:40:56.649079   45795 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 17:40:56.649094   45795 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 17:40:56.649101   45795 command_runner.go:130] > #
	I0819 17:40:56.649109   45795 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 17:40:56.649118   45795 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 17:40:56.649121   45795 command_runner.go:130] > #
	I0819 17:40:56.649127   45795 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 17:40:56.649135   45795 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 17:40:56.649142   45795 command_runner.go:130] > # limitation.
	I0819 17:40:56.649146   45795 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 17:40:56.649153   45795 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 17:40:56.649157   45795 command_runner.go:130] > runtime_type = "oci"
	I0819 17:40:56.649163   45795 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 17:40:56.649168   45795 command_runner.go:130] > runtime_config_path = ""
	I0819 17:40:56.649175   45795 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 17:40:56.649179   45795 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 17:40:56.649185   45795 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 17:40:56.649189   45795 command_runner.go:130] > monitor_env = [
	I0819 17:40:56.649197   45795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 17:40:56.649204   45795 command_runner.go:130] > ]
	I0819 17:40:56.649209   45795 command_runner.go:130] > privileged_without_host_devices = false
	I0819 17:40:56.649219   45795 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 17:40:56.649227   45795 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 17:40:56.649233   45795 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 17:40:56.649243   45795 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 17:40:56.649252   45795 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 17:40:56.649260   45795 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 17:40:56.649273   45795 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 17:40:56.649283   45795 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 17:40:56.649288   45795 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 17:40:56.649297   45795 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 17:40:56.649305   45795 command_runner.go:130] > # Example:
	I0819 17:40:56.649309   45795 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 17:40:56.649313   45795 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 17:40:56.649317   45795 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 17:40:56.649322   45795 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 17:40:56.649325   45795 command_runner.go:130] > # cpuset = "0-1"
	I0819 17:40:56.649334   45795 command_runner.go:130] > # cpushares = 0
	I0819 17:40:56.649338   45795 command_runner.go:130] > # Where:
	I0819 17:40:56.649344   45795 command_runner.go:130] > # The workload name is workload-type.
	I0819 17:40:56.649351   45795 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 17:40:56.649356   45795 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 17:40:56.649361   45795 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 17:40:56.649368   45795 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 17:40:56.649374   45795 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 17:40:56.649378   45795 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 17:40:56.649384   45795 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 17:40:56.649388   45795 command_runner.go:130] > # Default value is set to true
	I0819 17:40:56.649392   45795 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 17:40:56.649397   45795 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 17:40:56.649401   45795 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 17:40:56.649405   45795 command_runner.go:130] > # Default value is set to 'false'
	I0819 17:40:56.649409   45795 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 17:40:56.649414   45795 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 17:40:56.649417   45795 command_runner.go:130] > #
	I0819 17:40:56.649423   45795 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 17:40:56.649428   45795 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 17:40:56.649434   45795 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 17:40:56.649440   45795 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 17:40:56.649445   45795 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 17:40:56.649448   45795 command_runner.go:130] > [crio.image]
	I0819 17:40:56.649454   45795 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 17:40:56.649458   45795 command_runner.go:130] > # default_transport = "docker://"
	I0819 17:40:56.649463   45795 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 17:40:56.649468   45795 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 17:40:56.649472   45795 command_runner.go:130] > # global_auth_file = ""
	I0819 17:40:56.649476   45795 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 17:40:56.649481   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.649487   45795 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 17:40:56.649493   45795 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 17:40:56.649501   45795 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 17:40:56.649506   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.649513   45795 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 17:40:56.649523   45795 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 17:40:56.649532   45795 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0819 17:40:56.649545   45795 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0819 17:40:56.649553   45795 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 17:40:56.649563   45795 command_runner.go:130] > # pause_command = "/pause"
	I0819 17:40:56.649569   45795 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 17:40:56.649578   45795 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 17:40:56.649586   45795 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 17:40:56.649595   45795 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 17:40:56.649600   45795 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 17:40:56.649609   45795 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 17:40:56.649615   45795 command_runner.go:130] > # pinned_images = [
	I0819 17:40:56.649622   45795 command_runner.go:130] > # ]
	I0819 17:40:56.649630   45795 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 17:40:56.649637   45795 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 17:40:56.649647   45795 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 17:40:56.649656   45795 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 17:40:56.649663   45795 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 17:40:56.649670   45795 command_runner.go:130] > # signature_policy = ""
	I0819 17:40:56.649675   45795 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 17:40:56.649684   45795 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 17:40:56.649691   45795 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 17:40:56.649699   45795 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0819 17:40:56.649707   45795 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 17:40:56.649714   45795 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 17:40:56.649723   45795 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 17:40:56.649731   45795 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 17:40:56.649738   45795 command_runner.go:130] > # changing them here.
	I0819 17:40:56.649742   45795 command_runner.go:130] > # insecure_registries = [
	I0819 17:40:56.649748   45795 command_runner.go:130] > # ]
	I0819 17:40:56.649755   45795 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 17:40:56.649762   45795 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 17:40:56.649769   45795 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 17:40:56.649777   45795 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 17:40:56.649784   45795 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 17:40:56.649789   45795 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 17:40:56.649800   45795 command_runner.go:130] > # CNI plugins.
	I0819 17:40:56.649807   45795 command_runner.go:130] > [crio.network]
	I0819 17:40:56.649813   45795 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 17:40:56.649823   45795 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 17:40:56.649830   45795 command_runner.go:130] > # cni_default_network = ""
	I0819 17:40:56.649836   45795 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 17:40:56.649843   45795 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 17:40:56.649863   45795 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 17:40:56.649874   45795 command_runner.go:130] > # plugin_dirs = [
	I0819 17:40:56.649890   45795 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 17:40:56.649900   45795 command_runner.go:130] > # ]
	I0819 17:40:56.649910   45795 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 17:40:56.649918   45795 command_runner.go:130] > [crio.metrics]
	I0819 17:40:56.649923   45795 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 17:40:56.649930   45795 command_runner.go:130] > enable_metrics = true
	I0819 17:40:56.649934   45795 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 17:40:56.649945   45795 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 17:40:56.649955   45795 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 17:40:56.649969   45795 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 17:40:56.649982   45795 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 17:40:56.649990   45795 command_runner.go:130] > # metrics_collectors = [
	I0819 17:40:56.649993   45795 command_runner.go:130] > # 	"operations",
	I0819 17:40:56.649998   45795 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 17:40:56.650005   45795 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 17:40:56.650009   45795 command_runner.go:130] > # 	"operations_errors",
	I0819 17:40:56.650014   45795 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 17:40:56.650018   45795 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 17:40:56.650025   45795 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 17:40:56.650035   45795 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 17:40:56.650043   45795 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 17:40:56.650054   45795 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 17:40:56.650065   45795 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 17:40:56.650075   45795 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 17:40:56.650084   45795 command_runner.go:130] > # 	"containers_oom_total",
	I0819 17:40:56.650091   45795 command_runner.go:130] > # 	"containers_oom",
	I0819 17:40:56.650101   45795 command_runner.go:130] > # 	"processes_defunct",
	I0819 17:40:56.650116   45795 command_runner.go:130] > # 	"operations_total",
	I0819 17:40:56.650129   45795 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 17:40:56.650141   45795 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 17:40:56.650149   45795 command_runner.go:130] > # 	"operations_errors_total",
	I0819 17:40:56.650162   45795 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 17:40:56.650169   45795 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 17:40:56.650174   45795 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 17:40:56.650178   45795 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 17:40:56.650182   45795 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 17:40:56.650186   45795 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 17:40:56.650191   45795 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 17:40:56.650195   45795 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 17:40:56.650201   45795 command_runner.go:130] > # ]
	I0819 17:40:56.650207   45795 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 17:40:56.650211   45795 command_runner.go:130] > # metrics_port = 9090
	I0819 17:40:56.650216   45795 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 17:40:56.650225   45795 command_runner.go:130] > # metrics_socket = ""
	I0819 17:40:56.650232   45795 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 17:40:56.650238   45795 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 17:40:56.650247   45795 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 17:40:56.650251   45795 command_runner.go:130] > # certificate on any modification event.
	I0819 17:40:56.650259   45795 command_runner.go:130] > # metrics_cert = ""
	I0819 17:40:56.650264   45795 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 17:40:56.650271   45795 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 17:40:56.650276   45795 command_runner.go:130] > # metrics_key = ""
	I0819 17:40:56.650281   45795 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 17:40:56.650287   45795 command_runner.go:130] > [crio.tracing]
	I0819 17:40:56.650295   45795 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 17:40:56.650305   45795 command_runner.go:130] > # enable_tracing = false
	I0819 17:40:56.650310   45795 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 17:40:56.650317   45795 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 17:40:56.650323   45795 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 17:40:56.650330   45795 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 17:40:56.650335   45795 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 17:40:56.650341   45795 command_runner.go:130] > [crio.nri]
	I0819 17:40:56.650345   45795 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 17:40:56.650359   45795 command_runner.go:130] > # enable_nri = false
	I0819 17:40:56.650366   45795 command_runner.go:130] > # NRI socket to listen on.
	I0819 17:40:56.650371   45795 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 17:40:56.650375   45795 command_runner.go:130] > # NRI plugin directory to use.
	I0819 17:40:56.650380   45795 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 17:40:56.650388   45795 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 17:40:56.650393   45795 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 17:40:56.650401   45795 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 17:40:56.650405   45795 command_runner.go:130] > # nri_disable_connections = false
	I0819 17:40:56.650412   45795 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 17:40:56.650417   45795 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 17:40:56.650425   45795 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 17:40:56.650429   45795 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 17:40:56.650437   45795 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 17:40:56.650441   45795 command_runner.go:130] > [crio.stats]
	I0819 17:40:56.650449   45795 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 17:40:56.650454   45795 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 17:40:56.650461   45795 command_runner.go:130] > # stats_collection_period = 0
	I0819 17:40:56.650495   45795 command_runner.go:130] ! time="2024-08-19 17:40:56.603425312Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 17:40:56.650514   45795 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
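The long dump above is the effective CRI-O configuration the node starts with. As a rough, hedged sketch (not part of the minikube test harness), the handful of values the run actually depends on, such as pause_image and the runc runtime entry, could be read back with a short Go program. The /etc/crio/crio.conf path and the github.com/BurntSushi/toml dependency are assumptions based on the dump, not on minikube's own code, and CRI-O may additionally merge drop-ins from a crio.conf.d directory.

    // crioconf_check.go - minimal sketch; path and TOML layout assumed from the dump above.
    package main

    import (
    	"fmt"
    	"log"

    	"github.com/BurntSushi/toml"
    )

    // Only the keys inspected below are modelled; the decoder ignores everything else.
    type crioConfig struct {
    	Crio struct {
    		Image struct {
    			PauseImage string `toml:"pause_image"`
    		} `toml:"image"`
    		Runtime struct {
    			Runtimes map[string]struct {
    				RuntimePath string `toml:"runtime_path"`
    				RuntimeType string `toml:"runtime_type"`
    				MonitorPath string `toml:"monitor_path"`
    			} `toml:"runtimes"`
    		} `toml:"runtime"`
    	} `toml:"crio"`
    }

    func main() {
    	var cfg crioConfig
    	// Path is an assumption; the dump shows the rendered config, not where it lives.
    	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
    		log.Fatalf("decode crio.conf: %v", err)
    	}
    	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
    	for name, rt := range cfg.Crio.Runtime.Runtimes {
    		fmt.Printf("runtime %q -> %s (type %s, monitor %s)\n", name, rt.RuntimePath, rt.RuntimeType, rt.MonitorPath)
    	}
    }

For the config above this would report pause_image registry.k8s.io/pause:3.10 and the single "runc" handler at /usr/bin/runc.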
	I0819 17:40:56.650655   45795 cni.go:84] Creating CNI manager for ""
	I0819 17:40:56.650670   45795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 17:40:56.650682   45795 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:40:56.650707   45795 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-188752 NodeName:multinode-188752 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:40:56.650836   45795 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-188752"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
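The four YAML documents above are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged sketch (not how minikube itself validates the file), the multi-document stream can be syntax-checked and its kinds listed with gopkg.in/yaml.v3; the local kubeadm.yaml path is a placeholder for a saved copy of the rendered config.

    // kubeadm_yaml_check.go - minimal sketch assuming a local copy of the config above.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		// Only apiVersion and kind are inspected; other fields are ignored.
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // every document parsed cleanly
    			}
    			log.Fatalf("invalid YAML document: %v", err)
    		}
    		fmt.Printf("%-24s %s\n", doc.Kind, doc.APIVersion)
    	}
    }

For the config above this would list InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration.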
	
	I0819 17:40:56.650900   45795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:40:56.660985   45795 command_runner.go:130] > kubeadm
	I0819 17:40:56.661002   45795 command_runner.go:130] > kubectl
	I0819 17:40:56.661007   45795 command_runner.go:130] > kubelet
	I0819 17:40:56.661022   45795 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:40:56.661076   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:40:56.669985   45795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0819 17:40:56.687212   45795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:40:56.703554   45795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0819 17:40:56.720273   45795 ssh_runner.go:195] Run: grep 192.168.39.69	control-plane.minikube.internal$ /etc/hosts
	I0819 17:40:56.723648   45795 command_runner.go:130] > 192.168.39.69	control-plane.minikube.internal
	I0819 17:40:56.723757   45795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:40:56.865597   45795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:40:56.879937   45795 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752 for IP: 192.168.39.69
	I0819 17:40:56.879963   45795 certs.go:194] generating shared ca certs ...
	I0819 17:40:56.879977   45795 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:40:56.880117   45795 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:40:56.880155   45795 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:40:56.880164   45795 certs.go:256] generating profile certs ...
	I0819 17:40:56.880232   45795 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/client.key
	I0819 17:40:56.880290   45795 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.key.a6c14ce1
	I0819 17:40:56.880325   45795 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.key
	I0819 17:40:56.880338   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:40:56.880353   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:40:56.880366   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:40:56.880377   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:40:56.880389   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:40:56.880401   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:40:56.880414   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:40:56.880425   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:40:56.880485   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:40:56.880515   45795 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:40:56.880523   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:40:56.880547   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:40:56.880570   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:40:56.880600   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:40:56.880636   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:40:56.880661   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:56.880673   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:40:56.880686   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:40:56.881249   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:40:56.904165   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:40:56.926584   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:40:56.949480   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:40:56.971252   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 17:40:56.993205   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0819 17:40:57.014749   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:40:57.035945   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:40:57.057937   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:40:57.080724   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:40:57.102394   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:40:57.123552   45795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:40:57.138379   45795 ssh_runner.go:195] Run: openssl version
	I0819 17:40:57.143794   45795 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 17:40:57.143864   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:40:57.153933   45795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.157847   45795 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.157882   45795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.157922   45795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.162924   45795 command_runner.go:130] > b5213941
	I0819 17:40:57.162976   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:40:57.171396   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:40:57.182745   45795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.186895   45795 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.186979   45795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.187029   45795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.192457   45795 command_runner.go:130] > 51391683
	I0819 17:40:57.192683   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:40:57.202681   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:40:57.214422   45795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.218425   45795 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.218601   45795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.218638   45795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.223709   45795 command_runner.go:130] > 3ec20f2e
	I0819 17:40:57.224003   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
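The three blocks above repeat the same pattern for each CA bundle: place the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients pick it up. A rough Go equivalent of that per-certificate step is sketched below; it simply shells out to openssl the way the logged commands do, the paths are taken from the log, and this is not minikube's actual certs.go code.

    // trust_cert.go - minimal sketch of the hash-and-symlink step logged above (needs root).
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // trustCert symlinks /etc/ssl/certs/<subject-hash>.0 to the given PEM so that
    // OpenSSL's hashed directory lookup finds it, mirroring the ln -fs commands above.
    func trustCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl hash %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // replace any existing link, like ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	for _, pem := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/17837.pem",
    		"/usr/share/ca-certificates/178372.pem",
    	} {
    		if err := trustCert(pem); err != nil {
    			log.Fatal(err)
    		}
    	}
    }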
	I0819 17:40:57.234441   45795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:40:57.238657   45795 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:40:57.238680   45795 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 17:40:57.238689   45795 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 17:40:57.238699   45795 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 17:40:57.238710   45795 command_runner.go:130] > Access: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238715   45795 command_runner.go:130] > Modify: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238722   45795 command_runner.go:130] > Change: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238727   45795 command_runner.go:130] >  Birth: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238770   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 17:40:57.243974   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.244143   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 17:40:57.249692   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.249775   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 17:40:57.255002   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.255246   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 17:40:57.260321   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.260542   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 17:40:57.269823   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.269884   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 17:40:57.282056   45795 command_runner.go:130] > Certificate will not expire
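Each `openssl x509 -checkend 86400` call above simply asks whether the certificate will still be valid 24 hours from now. The same check can be done without shelling out, as in the hedged sketch below (standard library only; the certificate path is one of those from the log).

    // checkend.go - minimal sketch of the `openssl x509 -checkend 86400` test above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil || block.Type != "CERTIFICATE" {
    		return false, fmt.Errorf("%s: no certificate PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }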
	I0819 17:40:57.282410   45795 kubeadm.go:392] StartCluster: {Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:40:57.282521   45795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:40:57.282581   45795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:40:57.370072   45795 command_runner.go:130] > 81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d
	I0819 17:40:57.370122   45795 command_runner.go:130] > 1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205
	I0819 17:40:57.370130   45795 command_runner.go:130] > 2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1
	I0819 17:40:57.370137   45795 command_runner.go:130] > 176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930
	I0819 17:40:57.370143   45795 command_runner.go:130] > 3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433
	I0819 17:40:57.370148   45795 command_runner.go:130] > 1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb
	I0819 17:40:57.370156   45795 command_runner.go:130] > 25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e
	I0819 17:40:57.370163   45795 command_runner.go:130] > 37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765
	I0819 17:40:57.370184   45795 cri.go:89] found id: "81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d"
	I0819 17:40:57.370193   45795 cri.go:89] found id: "1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205"
	I0819 17:40:57.370197   45795 cri.go:89] found id: "2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1"
	I0819 17:40:57.370200   45795 cri.go:89] found id: "176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930"
	I0819 17:40:57.370202   45795 cri.go:89] found id: "3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433"
	I0819 17:40:57.370205   45795 cri.go:89] found id: "1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb"
	I0819 17:40:57.370208   45795 cri.go:89] found id: "25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e"
	I0819 17:40:57.370211   45795 cri.go:89] found id: "37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765"
	I0819 17:40:57.370215   45795 cri.go:89] found id: ""
	I0819 17:40:57.370268   45795 ssh_runner.go:195] Run: sudo runc list -f json
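The eight container IDs found above come from a single `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation, which lists every kube-system container regardless of state as bare IDs, one per line. A stripped-down version of that listing step is sketched below; it assumes crictl is on PATH and can reach the CRI-O socket, and it is not minikube's actual cri.go implementation.

    // list_kube_system.go - minimal sketch of the crictl listing step logged above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors the logged command: all containers (any state) in kube-system, IDs only.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		log.Fatalf("crictl ps: %v", err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    	for _, id := range ids {
    		fmt.Println(id)
    	}
    }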
	
	
	==> CRI-O <==
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.838400276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089363838374328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dab0f980-584a-4c6e-afe1-58667612901c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.838954673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3eb15d37-bb70-4c1f-aa47-2e4d000cf782 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.839008623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3eb15d37-bb70-4c1f-aa47-2e4d000cf782 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.839385269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3eb15d37-bb70-4c1f-aa47-2e4d000cf782 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.878548688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01be694d-e542-4109-a037-aac023564d5a name=/runtime.v1.RuntimeService/Version
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.878677577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01be694d-e542-4109-a037-aac023564d5a name=/runtime.v1.RuntimeService/Version
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.880024367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=473bbeef-e346-459c-aaa3-a6ffbb6fb102 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.880470095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089363880447720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=473bbeef-e346-459c-aaa3-a6ffbb6fb102 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.881131614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=237c89b5-5aa3-4208-85ae-090a2158bc4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.881201206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=237c89b5-5aa3-4208-85ae-090a2158bc4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.881543357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=237c89b5-5aa3-4208-85ae-090a2158bc4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.926044707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3abc2538-5856-4d7a-963f-ad3c50456d72 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.926141725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3abc2538-5856-4d7a-963f-ad3c50456d72 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.931672186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51d89021-de5b-4cf8-b8e9-9ca1ae6d0b2c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.932084396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089363932062968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51d89021-de5b-4cf8-b8e9-9ca1ae6d0b2c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.932824114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e43b3429-f382-479d-b21d-050bcb382eae name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.932898674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e43b3429-f382-479d-b21d-050bcb382eae name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.933270882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e43b3429-f382-479d-b21d-050bcb382eae name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.971363901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a675c49c-fe19-4a7b-82e0-2b8b09a89073 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.971449184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a675c49c-fe19-4a7b-82e0-2b8b09a89073 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.972764139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=720b0451-59ff-4c60-8378-099c4e6fde57 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.973216520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089363973192407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=720b0451-59ff-4c60-8378-099c4e6fde57 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.973892699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d694fa5-ac89-4347-bbaa-f1c88b762d4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.973964066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d694fa5-ac89-4347-bbaa-f1c88b762d4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:42:43 multinode-188752 crio[2747]: time="2024-08-19 17:42:43.974302325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d694fa5-ac89-4347-bbaa-f1c88b762d4a name=/runtime.v1.RuntimeService/ListContainers
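
The crio debug lines above record plain CRI gRPC traffic (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers) on the socket advertised in the node annotation (unix:///var/run/crio/crio.sock). For reference, a minimal Go sketch of a client issuing the same ListContainers call, assuming the generated k8s.io/cri-api stubs and google.golang.org/grpc are available; this is an illustration of the call being logged, not part of the test harness.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI socket that crio advertises in the node annotations.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Same shape as the ListContainers requests in the log above: an empty
        // filter returns the full container list, running and exited alike.
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State.String())
        }
    }

Run on the node itself (or over minikube ssh), this prints roughly the same rows as the container status table that follows.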
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	07ca491dfda88       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   674214e5d3552       busybox-7dff88458-vxmhm
	2e42d64f923ad       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   9a18756d8dd4d       kindnet-ncksr
	5767babbe0bce       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   30aee9f2fdde4       kube-proxy-56fnf
	0f07663a86a1f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   5b5043e2e12c5       storage-provisioner
	8941e42317dea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   5b311b241b4cd       etcd-multinode-188752
	1b2d87d82d3f8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   d7390fc89932e       kube-scheduler-multinode-188752
	953f745a20681       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   00bd52dbedae8       kube-apiserver-multinode-188752
	83da55b56059e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   232ec0a844a3b       kube-controller-manager-multinode-188752
	216e4c8e10963       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   4056fb23938f4       coredns-6f6b679f8f-mnbvf
	1abd882b91b50       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   af2d8a3af05f2       busybox-7dff88458-vxmhm
	81a9d4f57a424       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   29e22f9fa1432       coredns-6f6b679f8f-mnbvf
	1f84fa074cce8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   566e4b7ea183c       storage-provisioner
	2742257ec8503       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   28f4d55043b44       kindnet-ncksr
	176f9fa0d86f6       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   989926b39be5f       kube-proxy-56fnf
	3bee0cdeb76b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   c2327989a19cb       etcd-multinode-188752
	1ba01f8ae738a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   180db7099269a       kube-scheduler-multinode-188752
	25d4ed3bd6626       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   910d2d20c5a51       kube-controller-manager-multinode-188752
	37d1d2de67baa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   39c06d029ce8e       kube-apiserver-multinode-188752
	
	
	==> coredns [216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38720 - 55441 "HINFO IN 5343997893510302207.9165578157836977038. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018944069s
	
	
	==> coredns [81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d] <==
	[INFO] 10.244.0.3:41851 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746508s
	[INFO] 10.244.0.3:42665 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000057764s
	[INFO] 10.244.0.3:42099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000025724s
	[INFO] 10.244.0.3:38323 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001091561s
	[INFO] 10.244.0.3:53058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061165s
	[INFO] 10.244.0.3:42543 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043406s
	[INFO] 10.244.0.3:46403 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034173s
	[INFO] 10.244.1.2:57075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115734s
	[INFO] 10.244.1.2:34531 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147361s
	[INFO] 10.244.1.2:57457 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089663s
	[INFO] 10.244.1.2:40116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008145s
	[INFO] 10.244.0.3:50771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092382s
	[INFO] 10.244.0.3:50393 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113357s
	[INFO] 10.244.0.3:41834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042065s
	[INFO] 10.244.0.3:36633 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040365s
	[INFO] 10.244.1.2:32801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134907s
	[INFO] 10.244.1.2:37751 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157753s
	[INFO] 10.244.1.2:54066 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116598s
	[INFO] 10.244.1.2:39363 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087178s
	[INFO] 10.244.0.3:35037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118078s
	[INFO] 10.244.0.3:35544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073756s
	[INFO] 10.244.0.3:48081 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068742s
	[INFO] 10.244.0.3:39718 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050659s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
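
The query log above is ordinary cluster DNS traffic: forward lookups of kubernetes.default (with the search-path expansions that produce the NXDOMAIN entries), kubernetes.default.svc.cluster.local and host.minikube.internal, plus reverse lookups of 10.96.0.1 and 192.168.39.1. A small Go sketch of the lookups a pod would issue to generate comparable entries; the hostnames are taken from the log, and the program is assumed to run inside a pod so that /etc/resolv.conf points at this CoreDNS service.

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        // The zero-value resolver uses /etc/resolv.conf, i.e. the cluster DNS
        // service when run inside a pod.
        var r net.Resolver

        // Forward lookups matching the A/AAAA queries in the log.
        for _, name := range []string{
            "kubernetes.default.svc.cluster.local",
            "host.minikube.internal",
        } {
            addrs, err := r.LookupHost(ctx, name)
            fmt.Println(name, addrs, err)
        }

        // Reverse lookup matching the "1.0.96.10.in-addr.arpa" PTR queries.
        names, err := r.LookupAddr(ctx, "10.96.0.1")
        fmt.Println("10.96.0.1", names, err)
    }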
	
	
	==> describe nodes <==
	Name:               multinode-188752
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-188752
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=multinode-188752
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_34_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:34:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-188752
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-188752
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 724ce6a54c4c477cb4868dea45e6dda4
	  System UUID:                724ce6a5-4c4c-477c-b486-8dea45e6dda4
	  Boot ID:                    606b75a4-7cc0-4e88-b238-d5c7997ed47c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vxmhm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-6f6b679f8f-mnbvf                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 etcd-multinode-188752                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m29s
	  kube-system                 kindnet-ncksr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-188752             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-multinode-188752    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-56fnf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-188752             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m22s                kube-proxy       
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m28s                kubelet          Node multinode-188752 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m28s                kubelet          Node multinode-188752 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s                kubelet          Node multinode-188752 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m28s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m24s                node-controller  Node multinode-188752 event: Registered Node multinode-188752 in Controller
	  Normal  NodeReady                8m8s                 kubelet          Node multinode-188752 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-188752 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-188752 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-188752 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-188752 event: Registered Node multinode-188752 in Controller
	
	
	Name:               multinode-188752-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-188752-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=multinode-188752
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_41_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:41:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-188752-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:41:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:41:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:41:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:42:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    multinode-188752-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d66ec704151c48f9a62b2041a5b6525c
	  System UUID:                d66ec704-151c-48f9-a62b-2041a5b6525c
	  Boot ID:                    79179e9c-db29-40c4-97f1-d50f7fc8184b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7z224    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-4s8lm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m34s
	  kube-system                 kube-proxy-svsc7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m28s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m35s (x2 over 7m35s)  kubelet     Node multinode-188752-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x2 over 7m35s)  kubelet     Node multinode-188752-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x2 over 7m35s)  kubelet     Node multinode-188752-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m14s                  kubelet     Node multinode-188752-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-188752-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-188752-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-188752-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-188752-m02 status is now: NodeReady
	
	
	Name:               multinode-188752-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-188752-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=multinode-188752
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_42_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:42:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-188752-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:42:41 +0000   Mon, 19 Aug 2024 17:42:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:42:41 +0000   Mon, 19 Aug 2024 17:42:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:42:41 +0000   Mon, 19 Aug 2024 17:42:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:42:41 +0000   Mon, 19 Aug 2024 17:42:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    multinode-188752-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e856311199384110ab8d20086a8cce58
	  System UUID:                e8563111-9938-4110-ab8d-20086a8cce58
	  Boot ID:                    3cf88b5e-0c67-49d0-a0ac-0a51a8ff7691
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dhm77       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-kqw6z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m36s)  kubelet          Node multinode-188752-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m36s)  kubelet          Node multinode-188752-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m36s)  kubelet          Node multinode-188752-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m16s                  kubelet          Node multinode-188752-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m47s)  kubelet          Node multinode-188752-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m47s)  kubelet          Node multinode-188752-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m47s)  kubelet          Node multinode-188752-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m27s                  kubelet          Node multinode-188752-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     23s                    cidrAllocator    Node multinode-188752-m03 status is now: CIDRAssignmentFailed
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet          Node multinode-188752-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet          Node multinode-188752-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet          Node multinode-188752-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                    node-controller  Node multinode-188752-m03 event: Registered Node multinode-188752-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-188752-m03 status is now: NodeReady
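
The three node descriptions above are the standard kubectl rendering; the same Ready/MemoryPressure/DiskPressure/PIDPressure conditions can be pulled programmatically. A short client-go sketch follows, assuming a kubeconfig for this profile at the default ~/.kube/config path (the test harness would point it at its own profile instead).

    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        home, _ := os.UserHomeDir()
        // Assumed kubeconfig location; adjust to the profile under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                // Mirrors the Conditions table in the describe output above.
                fmt.Printf("%-22s %-16s %-6s %s\n", n.Name, c.Type, c.Status, c.Reason)
            }
        }
    }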
	
	
	==> dmesg <==
	[  +0.054968] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069178] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.191772] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.118720] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.264512] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.837018] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.611150] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.061918] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.483059] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.078975] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.104746] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.123578] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.070599] kauditd_printk_skb: 58 callbacks suppressed
	[Aug19 17:35] kauditd_printk_skb: 14 callbacks suppressed
	[Aug19 17:40] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.148300] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.159805] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.139742] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +0.260045] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +1.977523] systemd-fstab-generator[2829]: Ignoring "noauto" option for root device
	[  +1.994487] systemd-fstab-generator[3056]: Ignoring "noauto" option for root device
	[  +0.728166] kauditd_printk_skb: 154 callbacks suppressed
	[Aug19 17:41] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.505691] systemd-fstab-generator[3788]: Ignoring "noauto" option for root device
	[ +18.453629] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433] <==
	{"level":"info","ts":"2024-08-19T17:34:12.163855Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:34:12.180731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	{"level":"warn","ts":"2024-08-19T17:35:09.886907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.850146ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:35:09.887190Z","caller":"traceutil/trace.go:171","msg":"trace[953954464] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:450; }","duration":"152.18195ms","start":"2024-08-19T17:35:09.734997Z","end":"2024-08-19T17:35:09.887178Z","steps":["trace[953954464] 'range keys from in-memory index tree'  (duration: 151.838482ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:35:09.887085Z","caller":"traceutil/trace.go:171","msg":"trace[1185464655] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"206.126437ms","start":"2024-08-19T17:35:09.680946Z","end":"2024-08-19T17:35:09.887072Z","steps":["trace[1185464655] 'process raft request'  (duration: 204.705459ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:35:13.096387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.035586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:35:13.096437Z","caller":"traceutil/trace.go:171","msg":"trace[182870289] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:482; }","duration":"140.118283ms","start":"2024-08-19T17:35:12.956308Z","end":"2024-08-19T17:35:13.096426Z","steps":["trace[182870289] 'range keys from in-memory index tree'  (duration: 139.9871ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:36:09.031330Z","caller":"traceutil/trace.go:171","msg":"trace[100700449] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:621; }","duration":"131.902475ms","start":"2024-08-19T17:36:08.899394Z","end":"2024-08-19T17:36:09.031296Z","steps":["trace[100700449] 'read index received'  (duration: 128.236213ms)","trace[100700449] 'applied index is now lower than readState.Index'  (duration: 3.664681ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:36:09.031388Z","caller":"traceutil/trace.go:171","msg":"trace[292842797] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"132.007782ms","start":"2024-08-19T17:36:08.899359Z","end":"2024-08-19T17:36:09.031367Z","steps":["trace[292842797] 'process raft request'  (duration: 128.310256ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:36:09.031781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.325304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-188752-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:36:09.031836Z","caller":"traceutil/trace.go:171","msg":"trace[763824854] range","detail":"{range_begin:/registry/minions/multinode-188752-m03; range_end:; response_count:0; response_revision:590; }","duration":"132.438192ms","start":"2024-08-19T17:36:08.899390Z","end":"2024-08-19T17:36:09.031828Z","steps":["trace[763824854] 'agreement among raft nodes before linearized reading'  (duration: 132.02645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:36:10.380248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.837851ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10492139648658991075 > lease_revoke:<id:119b916bb41c675d>","response":"size:28"}
	{"level":"info","ts":"2024-08-19T17:36:10.717518Z","caller":"traceutil/trace.go:171","msg":"trace[557239600] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"178.166794ms","start":"2024-08-19T17:36:10.539338Z","end":"2024-08-19T17:36:10.717505Z","steps":["trace[557239600] 'process raft request'  (duration: 178.018137ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:37:06.146640Z","caller":"traceutil/trace.go:171","msg":"trace[1195799428] transaction","detail":"{read_only:false; response_revision:723; number_of_response:1; }","duration":"113.310467ms","start":"2024-08-19T17:37:06.033253Z","end":"2024-08-19T17:37:06.146563Z","steps":["trace[1195799428] 'process raft request'  (duration: 113.201333ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:39:22.836913Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T17:39:22.837062Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-188752","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	{"level":"warn","ts":"2024-08-19T17:39:22.837192Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:39:22.837304Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/08/19 17:39:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T17:39:22.885164Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:39:22.885203Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T17:39:22.885288Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9199217ddd03919b","current-leader-member-id":"9199217ddd03919b"}
	{"level":"info","ts":"2024-08-19T17:39:22.887796Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:39:22.887963Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:39:22.887985Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-188752","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	
	
	==> etcd [8941e42317dea38472221549875240680597844704007c0a764ac708a8647893] <==
	{"level":"info","ts":"2024-08-19T17:41:00.064693Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","added-peer-id":"9199217ddd03919b","added-peer-peer-urls":["https://192.168.39.69:2380"]}
	{"level":"info","ts":"2024-08-19T17:41:00.064914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:41:00.064994Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:41:00.068227Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:41:00.068928Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:41:00.068940Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:41:00.072918Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:41:00.072944Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:41:01.297260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T17:41:01.297313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:41:01.297355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-08-19T17:41:01.297375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.297383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgVoteResp from 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.297397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became leader at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.297406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9199217ddd03919b elected leader 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.302545Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9199217ddd03919b","local-member-attributes":"{Name:multinode-188752 ClientURLs:[https://192.168.39.69:2379]}","request-path":"/0/members/9199217ddd03919b/attributes","cluster-id":"6c21f62219c1156b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T17:41:01.302770Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:41:01.302853Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:41:01.303443Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:41:01.303478Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:41:01.304416Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:41:01.304416Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:41:01.306412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T17:41:01.306636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	{"level":"info","ts":"2024-08-19T17:42:29.598616Z","caller":"traceutil/trace.go:171","msg":"trace[538643094] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"152.35683ms","start":"2024-08-19T17:42:29.446170Z","end":"2024-08-19T17:42:29.598526Z","steps":["trace[538643094] 'process raft request'  (duration: 152.199839ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:42:44 up 9 min,  0 users,  load average: 0.27, 0.30, 0.16
	Linux multinode-188752 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1] <==
	I0819 17:38:36.198786       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:38:46.199373       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:38:46.199430       1 main.go:299] handling current node
	I0819 17:38:46.199452       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:38:46.199458       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:38:46.199698       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:38:46.199707       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:38:56.193381       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:38:56.193436       1 main.go:299] handling current node
	I0819 17:38:56.193457       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:38:56.193464       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:38:56.193657       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:38:56.193681       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:39:06.202245       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:39:06.202281       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:39:06.202411       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:39:06.202431       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:39:06.202520       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:39:06.202539       1 main.go:299] handling current node
	I0819 17:39:16.198770       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:39:16.198833       1 main.go:299] handling current node
	I0819 17:39:16.198848       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:39:16.198854       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:39:16.198981       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:39:16.198987       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4] <==
	I0819 17:42:04.491855       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:42:14.490936       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:42:14.491124       1 main.go:299] handling current node
	I0819 17:42:14.491166       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:42:14.491195       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:14.491408       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:42:14.491460       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:42:24.490705       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:42:24.490850       1 main.go:299] handling current node
	I0819 17:42:24.490951       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:42:24.490992       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:24.491221       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:42:24.491265       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.2.0/24] 
	I0819 17:42:34.492672       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:42:34.492795       1 main.go:299] handling current node
	I0819 17:42:34.492836       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:42:34.492864       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:42:34.493161       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:42:34.493201       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.2.0/24] 
	I0819 17:42:44.490840       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:42:44.490891       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.2.0/24] 
	I0819 17:42:44.491062       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:42:44.491094       1 main.go:299] handling current node
	I0819 17:42:44.491116       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:42:44.491130       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765] <==
	I0819 17:39:22.846448       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0819 17:39:22.846473       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0819 17:39:22.846500       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0819 17:39:22.846525       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0819 17:39:22.846544       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0819 17:39:22.846557       1 establishing_controller.go:92] Shutting down EstablishingController
	I0819 17:39:22.846634       1 naming_controller.go:305] Shutting down NamingConditionController
	I0819 17:39:22.846657       1 controller.go:170] Shutting down OpenAPI controller
	I0819 17:39:22.846714       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0819 17:39:22.846741       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0819 17:39:22.846765       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0819 17:39:22.846798       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0819 17:39:22.848954       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:39:22.849210       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0819 17:39:22.853056       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853161       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853249       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853337       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853552       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853932       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.854070       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 17:39:22.854234       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0819 17:39:22.854328       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 17:39:22.854400       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0819 17:39:22.856486       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-apiserver [953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06] <==
	I0819 17:41:02.567192       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:41:02.576295       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:41:02.577719       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:41:02.578184       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:41:02.578230       1 policy_source.go:224] refreshing policies
	I0819 17:41:02.578504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 17:41:02.578642       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:41:02.578900       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:41:02.591632       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:41:02.594254       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:41:02.594282       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 17:41:02.610230       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:41:02.617972       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:41:02.623537       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:41:02.623598       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:41:02.623621       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:41:02.623633       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:41:03.476315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:41:04.495794       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:41:04.648364       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:41:04.667739       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:41:04.746294       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 17:41:04.752812       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:41:06.077931       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:41:06.178869       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e] <==
	I0819 17:36:57.013196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:57.248835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:57.249719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:36:58.097181       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-188752-m03\" does not exist"
	I0819 17:36:58.097236       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:36:58.115174       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-188752-m03" podCIDRs=["10.244.3.0/24"]
	I0819 17:36:58.115209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:58.115252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:58.233642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:58.550839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:00.537396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:08.375851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:17.649409       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:37:17.649440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:17.662476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:20.477870       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:55.493167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:37:55.493269       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m03"
	I0819 17:37:55.509999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:37:55.544329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.651337ms"
	I0819 17:37:55.545021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.432µs"
	I0819 17:38:00.546841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:38:00.561487       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:38:00.623160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:38:10.697112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	
	
	==> kube-controller-manager [83da55b56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777] <==
	I0819 17:42:02.473118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.132µs"
	I0819 17:42:05.881929       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:42:06.311449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.729915ms"
	I0819 17:42:06.311865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.561µs"
	I0819 17:42:13.600087       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:42:20.059627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:20.078242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:20.293801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:20.293949       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:42:21.543049       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-188752-m03\" does not exist"
	I0819 17:42:21.543215       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:42:21.570819       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-188752-m03" podCIDRs=["10.244.2.0/24"]
	I0819 17:42:21.570859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	E0819 17:42:21.585379       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-188752-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-188752-m03" podCIDRs=["10.244.3.0/24"]
	E0819 17:42:21.585454       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-188752-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-188752-m03"
	E0819 17:42:21.585516       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-188752-m03': failed to patch node CIDR: Node \"multinode-188752-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 17:42:21.585542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:21.591043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:21.813435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:22.143635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:25.984423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:31.888106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:41.144776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:41.144944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:42:41.154138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	
	
	==> kube-proxy [176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:34:22.138959       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:34:22.151504       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E0819 17:34:22.151611       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:34:22.191752       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:34:22.191821       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:34:22.191990       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:34:22.194204       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:34:22.194482       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:34:22.194506       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:34:22.196447       1 config.go:197] "Starting service config controller"
	I0819 17:34:22.196724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:34:22.196794       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:34:22.196812       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:34:22.197292       1 config.go:326] "Starting node config controller"
	I0819 17:34:22.197314       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:34:22.297701       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:34:22.297748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:34:22.297808       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:41:03.749149       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:41:03.758767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E0819 17:41:03.758844       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:41:03.821019       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:41:03.821083       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:41:03.821112       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:41:03.827403       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:41:03.827714       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:41:03.827737       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:41:03.832661       1 config.go:197] "Starting service config controller"
	I0819 17:41:03.832747       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:41:03.832828       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:41:03.832843       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:41:03.834401       1 config.go:326] "Starting node config controller"
	I0819 17:41:03.834422       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:41:03.933693       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:41:03.933725       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:41:03.935063       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c] <==
	I0819 17:41:00.349431       1 serving.go:386] Generated self-signed cert in-memory
	W0819 17:41:02.520222       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 17:41:02.520356       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 17:41:02.520393       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 17:41:02.520463       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 17:41:02.606912       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 17:41:02.606950       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:41:02.616534       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 17:41:02.616709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:41:02.616763       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:41:02.616788       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 17:41:02.716946       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb] <==
	E0819 17:34:13.510038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.510021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:34:13.510157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.510171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:13.510355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.510124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:34:13.510453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.518445       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:34:13.518613       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:34:14.404197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.404394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.486155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.486273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.565765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:34:14.565906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.590521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.590729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.602947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:34:14.603027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.648044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:34:14.648433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.724154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.724268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:34:15.094549       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:39:22.826475       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 17:41:09 multinode-188752 kubelet[3063]: E0819 17:41:09.056259    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089269055868232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:09 multinode-188752 kubelet[3063]: E0819 17:41:09.056533    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089269055868232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:19 multinode-188752 kubelet[3063]: E0819 17:41:19.058881    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089279058389259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:19 multinode-188752 kubelet[3063]: E0819 17:41:19.058919    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089279058389259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:29 multinode-188752 kubelet[3063]: E0819 17:41:29.061265    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089289060853528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:29 multinode-188752 kubelet[3063]: E0819 17:41:29.061358    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089289060853528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:39 multinode-188752 kubelet[3063]: E0819 17:41:39.063523    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089299063034689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:39 multinode-188752 kubelet[3063]: E0819 17:41:39.063551    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089299063034689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:49 multinode-188752 kubelet[3063]: E0819 17:41:49.066121    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089309065178335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:49 multinode-188752 kubelet[3063]: E0819 17:41:49.066149    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089309065178335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:59 multinode-188752 kubelet[3063]: E0819 17:41:59.020523    3063 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:41:59 multinode-188752 kubelet[3063]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:41:59 multinode-188752 kubelet[3063]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:41:59 multinode-188752 kubelet[3063]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:41:59 multinode-188752 kubelet[3063]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:41:59 multinode-188752 kubelet[3063]: E0819 17:41:59.068619    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089319068010808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:41:59 multinode-188752 kubelet[3063]: E0819 17:41:59.068654    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089319068010808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:09 multinode-188752 kubelet[3063]: E0819 17:42:09.070983    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089329070247278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:09 multinode-188752 kubelet[3063]: E0819 17:42:09.071080    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089329070247278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:19 multinode-188752 kubelet[3063]: E0819 17:42:19.072895    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089339072425446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:19 multinode-188752 kubelet[3063]: E0819 17:42:19.072954    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089339072425446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:29 multinode-188752 kubelet[3063]: E0819 17:42:29.075311    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089349074931586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:29 multinode-188752 kubelet[3063]: E0819 17:42:29.075756    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089349074931586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:39 multinode-188752 kubelet[3063]: E0819 17:42:39.079242    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089359078547156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:42:39 multinode-188752 kubelet[3063]: E0819 17:42:39.079280    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089359078547156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 17:42:43.545635   46895 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19478-10654/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-188752 -n multinode-188752
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-188752 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.18s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 stop
E0819 17:43:15.961006   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-188752 stop: exit status 82 (2m0.471580068s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-188752-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-188752 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-188752 status: exit status 3 (18.654341173s)

                                                
                                                
-- stdout --
	multinode-188752
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-188752-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 17:45:06.605104   47570 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	E0819 17:45:06.605139   47570 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-188752 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-188752 -n multinode-188752
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-188752 logs -n 25: (1.359338967s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752:/home/docker/cp-test_multinode-188752-m02_multinode-188752.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752 sudo cat                                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m02_multinode-188752.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03:/home/docker/cp-test_multinode-188752-m02_multinode-188752-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752-m03 sudo cat                                   | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m02_multinode-188752-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp testdata/cp-test.txt                                                | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2485370709/001/cp-test_multinode-188752-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752:/home/docker/cp-test_multinode-188752-m03_multinode-188752.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752 sudo cat                                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m03_multinode-188752.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt                       | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m02:/home/docker/cp-test_multinode-188752-m03_multinode-188752-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n                                                                 | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | multinode-188752-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-188752 ssh -n multinode-188752-m02 sudo cat                                   | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	|         | /home/docker/cp-test_multinode-188752-m03_multinode-188752-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-188752 node stop m03                                                          | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:36 UTC |
	| node    | multinode-188752 node start                                                             | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:36 UTC | 19 Aug 24 17:37 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-188752                                                                | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:37 UTC |                     |
	| stop    | -p multinode-188752                                                                     | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:37 UTC |                     |
	| start   | -p multinode-188752                                                                     | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:39 UTC | 19 Aug 24 17:42 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-188752                                                                | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:42 UTC |                     |
	| node    | multinode-188752 node delete                                                            | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:42 UTC | 19 Aug 24 17:42 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-188752 stop                                                                   | multinode-188752 | jenkins | v1.33.1 | 19 Aug 24 17:42 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:39:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:39:21.965809   45795 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:39:21.965910   45795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:39:21.965917   45795 out.go:358] Setting ErrFile to fd 2...
	I0819 17:39:21.965922   45795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:39:21.966090   45795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:39:21.966623   45795 out.go:352] Setting JSON to false
	I0819 17:39:21.967530   45795 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4907,"bootTime":1724084255,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:39:21.967589   45795 start.go:139] virtualization: kvm guest
	I0819 17:39:21.970187   45795 out.go:177] * [multinode-188752] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:39:21.971750   45795 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:39:21.971747   45795 notify.go:220] Checking for updates...
	I0819 17:39:21.974537   45795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:39:21.975877   45795 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:39:21.977146   45795 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:39:21.978467   45795 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:39:21.979760   45795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:39:21.981406   45795 config.go:182] Loaded profile config "multinode-188752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:39:21.981515   45795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:39:21.981931   45795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:39:21.981982   45795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:39:21.997332   45795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I0819 17:39:21.997750   45795 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:39:21.998269   45795 main.go:141] libmachine: Using API Version  1
	I0819 17:39:21.998292   45795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:39:21.998592   45795 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:39:21.998787   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:39:22.035526   45795 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 17:39:22.036816   45795 start.go:297] selected driver: kvm2
	I0819 17:39:22.036842   45795 start.go:901] validating driver "kvm2" against &{Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:39:22.036969   45795 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:39:22.037264   45795 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:39:22.037337   45795 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:39:22.052193   45795 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:39:22.052975   45795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:39:22.053036   45795 cni.go:84] Creating CNI manager for ""
	I0819 17:39:22.053047   45795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 17:39:22.053101   45795 start.go:340] cluster config:
	{Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:39:22.053241   45795 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:39:22.055954   45795 out.go:177] * Starting "multinode-188752" primary control-plane node in "multinode-188752" cluster
	I0819 17:39:22.057331   45795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:39:22.057387   45795 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:39:22.057397   45795 cache.go:56] Caching tarball of preloaded images
	I0819 17:39:22.057471   45795 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:39:22.057481   45795 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:39:22.057589   45795 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/config.json ...
	I0819 17:39:22.057790   45795 start.go:360] acquireMachinesLock for multinode-188752: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:39:22.057831   45795 start.go:364] duration metric: took 23.444µs to acquireMachinesLock for "multinode-188752"
	I0819 17:39:22.057849   45795 start.go:96] Skipping create...Using existing machine configuration
	I0819 17:39:22.057860   45795 fix.go:54] fixHost starting: 
	I0819 17:39:22.058105   45795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:39:22.058133   45795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:39:22.072213   45795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0819 17:39:22.072703   45795 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:39:22.073363   45795 main.go:141] libmachine: Using API Version  1
	I0819 17:39:22.073389   45795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:39:22.073737   45795 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:39:22.073912   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:39:22.074078   45795 main.go:141] libmachine: (multinode-188752) Calling .GetState
	I0819 17:39:22.075759   45795 fix.go:112] recreateIfNeeded on multinode-188752: state=Running err=<nil>
	W0819 17:39:22.075777   45795 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 17:39:22.077576   45795 out.go:177] * Updating the running kvm2 "multinode-188752" VM ...
	I0819 17:39:22.078867   45795 machine.go:93] provisionDockerMachine start ...
	I0819 17:39:22.078885   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:39:22.079102   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.081607   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.082078   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.082110   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.082364   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.082546   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.082730   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.082876   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.083061   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.083233   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.083244   45795 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:39:22.194515   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-188752
	
	I0819 17:39:22.194548   45795 main.go:141] libmachine: (multinode-188752) Calling .GetMachineName
	I0819 17:39:22.194819   45795 buildroot.go:166] provisioning hostname "multinode-188752"
	I0819 17:39:22.194843   45795 main.go:141] libmachine: (multinode-188752) Calling .GetMachineName
	I0819 17:39:22.195052   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.197662   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.198070   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.198096   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.198229   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.198400   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.198538   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.198694   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.198812   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.199017   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.199033   45795 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-188752 && echo "multinode-188752" | sudo tee /etc/hostname
	I0819 17:39:22.326653   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-188752
	
	I0819 17:39:22.326683   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.329789   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.330145   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.330183   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.330356   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.330551   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.330735   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.330875   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.331081   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.331251   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.331267   45795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-188752' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-188752/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-188752' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:39:22.437774   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
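
The provisioning steps above ("Using SSH client type: native", "About to run SSH command") amount to opening an SSH session to the node and running a short shell snippet such as the /etc/hosts edit just shown. A minimal sketch of that flow using golang.org/x/crypto/ssh follows; the key path, user and address are copied from the log, and skipping host-key verification is a simplification for brevity, not a recommendation, and this is not minikube's actual provisioner code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification for this sketch only
	}

	client, err := ssh.Dial("tcp", "192.168.39.69:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Run a remote command and capture its combined output, as the
	// "SSH cmd err, output" log lines above do.
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
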
	I0819 17:39:22.437798   45795 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:39:22.437821   45795 buildroot.go:174] setting up certificates
	I0819 17:39:22.437836   45795 provision.go:84] configureAuth start
	I0819 17:39:22.437847   45795 main.go:141] libmachine: (multinode-188752) Calling .GetMachineName
	I0819 17:39:22.438103   45795 main.go:141] libmachine: (multinode-188752) Calling .GetIP
	I0819 17:39:22.440771   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.441113   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.441139   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.441279   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.443362   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.443668   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.443698   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.443758   45795 provision.go:143] copyHostCerts
	I0819 17:39:22.443784   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:39:22.443831   45795 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:39:22.443852   45795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:39:22.443931   45795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:39:22.444032   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:39:22.444054   45795 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:39:22.444060   45795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:39:22.444099   45795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:39:22.444166   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:39:22.444189   45795 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:39:22.444195   45795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:39:22.444224   45795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:39:22.444290   45795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.multinode-188752 san=[127.0.0.1 192.168.39.69 localhost minikube multinode-188752]
	I0819 17:39:22.547367   45795 provision.go:177] copyRemoteCerts
	I0819 17:39:22.547427   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:39:22.547447   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.550340   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.550702   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.550732   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.550882   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.551084   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.551232   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.551385   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:39:22.634395   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 17:39:22.634455   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:39:22.658822   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 17:39:22.658902   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0819 17:39:22.682120   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 17:39:22.682172   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:39:22.705123   45795 provision.go:87] duration metric: took 267.275084ms to configureAuth
	I0819 17:39:22.705151   45795 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:39:22.705360   45795 config.go:182] Loaded profile config "multinode-188752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:39:22.705425   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:39:22.708059   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.708469   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:39:22.708490   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:39:22.708675   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:39:22.708866   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.709007   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:39:22.709176   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:39:22.709344   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:39:22.709538   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:39:22.709554   45795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:40:53.432004   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:40:53.432037   45795 machine.go:96] duration metric: took 1m31.353159045s to provisionDockerMachine
	I0819 17:40:53.432049   45795 start.go:293] postStartSetup for "multinode-188752" (driver="kvm2")
	I0819 17:40:53.432070   45795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:40:53.432085   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.432413   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:40:53.432444   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.435583   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.436112   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.436140   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.436299   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.436520   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.436686   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.436842   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:40:53.522157   45795 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:40:53.526128   45795 command_runner.go:130] > NAME=Buildroot
	I0819 17:40:53.526156   45795 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0819 17:40:53.526164   45795 command_runner.go:130] > ID=buildroot
	I0819 17:40:53.526173   45795 command_runner.go:130] > VERSION_ID=2023.02.9
	I0819 17:40:53.526183   45795 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0819 17:40:53.526224   45795 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:40:53.526238   45795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:40:53.526314   45795 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:40:53.526411   45795 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:40:53.526423   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /etc/ssl/certs/178372.pem
	I0819 17:40:53.526515   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:40:53.535624   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:40:53.558181   45795 start.go:296] duration metric: took 126.118468ms for postStartSetup
	I0819 17:40:53.558234   45795 fix.go:56] duration metric: took 1m31.500376025s for fixHost
	I0819 17:40:53.558260   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.561123   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.561559   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.561589   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.561743   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.561928   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.562130   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.562255   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.562428   45795 main.go:141] libmachine: Using SSH client type: native
	I0819 17:40:53.562630   45795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0819 17:40:53.562642   45795 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:40:53.669279   45795 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724089253.640714702
	
	I0819 17:40:53.669303   45795 fix.go:216] guest clock: 1724089253.640714702
	I0819 17:40:53.669311   45795 fix.go:229] Guest: 2024-08-19 17:40:53.640714702 +0000 UTC Remote: 2024-08-19 17:40:53.558239836 +0000 UTC m=+91.626880087 (delta=82.474866ms)
	I0819 17:40:53.669346   45795 fix.go:200] guest clock delta is within tolerance: 82.474866ms
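
The fix.go lines above compare the guest clock, read over SSH with `date +%s.%N`, against the host-side timestamp and accept the roughly 82ms delta as within tolerance. A small sketch of that comparison follows, reusing the two timestamps from the log; the 1-second tolerance used here is an assumed value for illustration, not necessarily the one minikube applies.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the "seconds.nanoseconds" string produced by `date +%s.%N`.
func guestTime(dateOutput string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(dateOutput), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var ns int64
	if frac != "" {
		if ns, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, err := guestTime("1724089253.640714702") // guest value from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 8, 19, 17, 40, 53, 558239836, time.UTC) // "Remote" timestamp from the log

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v, within 1s tolerance: %v\n", delta, delta < time.Second)
}
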
	I0819 17:40:53.669352   45795 start.go:83] releasing machines lock for "multinode-188752", held for 1m31.611511852s
	I0819 17:40:53.669369   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.669629   45795 main.go:141] libmachine: (multinode-188752) Calling .GetIP
	I0819 17:40:53.672342   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.672675   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.672722   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.672897   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.673450   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.673631   45795 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:40:53.673746   45795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:40:53.673792   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.673834   45795 ssh_runner.go:195] Run: cat /version.json
	I0819 17:40:53.673857   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:40:53.676393   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.676690   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.676727   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.676782   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.676923   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.677091   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.677226   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.677225   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:53.677283   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:53.677395   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:40:53.677392   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:40:53.677563   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:40:53.677702   45795 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:40:53.677832   45795 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:40:53.796698   45795 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0819 17:40:53.797400   45795 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0819 17:40:53.797547   45795 ssh_runner.go:195] Run: systemctl --version
	I0819 17:40:53.803318   45795 command_runner.go:130] > systemd 252 (252)
	I0819 17:40:53.803350   45795 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0819 17:40:53.803527   45795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:40:53.958940   45795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 17:40:53.965837   45795 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0819 17:40:53.966107   45795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:40:53.966176   45795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:40:53.974992   45795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 17:40:53.975015   45795 start.go:495] detecting cgroup driver to use...
	I0819 17:40:53.975077   45795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:40:53.994417   45795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:40:54.008704   45795 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:40:54.008793   45795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:40:54.022816   45795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:40:54.036949   45795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:40:54.187693   45795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:40:54.324523   45795 docker.go:233] disabling docker service ...
	I0819 17:40:54.324604   45795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:40:54.340685   45795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:40:54.353740   45795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:40:54.487839   45795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:40:54.622892   45795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:40:54.635983   45795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:40:54.653695   45795 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0819 17:40:54.653747   45795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:40:54.653797   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.663925   45795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:40:54.664010   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.673859   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.684317   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.695144   45795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:40:54.705022   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.714693   45795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.725243   45795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:40:54.735241   45795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:40:54.745413   45795 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0819 17:40:54.745484   45795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:40:54.755251   45795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:40:54.886710   45795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:40:56.414632   45795 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.527883252s)
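
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses registry.k8s.io/pause:3.10 as the pause image and "cgroupfs" as its cgroup manager, after which crio is restarted. The Go sketch below applies the same two substitutions to an in-memory copy of that file; the sample config contents are assumed for illustration and this is not how minikube itself performs the edit (it shells out to sed, as logged).

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed sample of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`

	// Mirror the two sed expressions from the log: replace the whole line
	// that sets pause_image, and the whole line that sets cgroup_manager.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
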
	I0819 17:40:56.414668   45795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:40:56.414718   45795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:40:56.419013   45795 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0819 17:40:56.419034   45795 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0819 17:40:56.419040   45795 command_runner.go:130] > Device: 0,22	Inode: 1348        Links: 1
	I0819 17:40:56.419047   45795 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 17:40:56.419052   45795 command_runner.go:130] > Access: 2024-08-19 17:40:56.285457337 +0000
	I0819 17:40:56.419058   45795 command_runner.go:130] > Modify: 2024-08-19 17:40:56.285457337 +0000
	I0819 17:40:56.419062   45795 command_runner.go:130] > Change: 2024-08-19 17:40:56.285457337 +0000
	I0819 17:40:56.419066   45795 command_runner.go:130] >  Birth: -
	I0819 17:40:56.419107   45795 start.go:563] Will wait 60s for crictl version
	I0819 17:40:56.419162   45795 ssh_runner.go:195] Run: which crictl
	I0819 17:40:56.422534   45795 command_runner.go:130] > /usr/bin/crictl
	I0819 17:40:56.422647   45795 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:40:56.455570   45795 command_runner.go:130] > Version:  0.1.0
	I0819 17:40:56.455595   45795 command_runner.go:130] > RuntimeName:  cri-o
	I0819 17:40:56.455600   45795 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0819 17:40:56.455605   45795 command_runner.go:130] > RuntimeApiVersion:  v1
	I0819 17:40:56.456641   45795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
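The crictl version check above prints simple "Key:  value" lines. A small sketch of shelling out to the same command and pulling out the runtime name and version, assuming crictl lives at the path shown in the log (this is a hypothetical helper, not minikube's code):

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same invocation as in the log: sudo /usr/bin/crictl version
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
        if err != nil {
            log.Fatalf("crictl version: %v", err)
        }
        fields := map[string]string{}
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            // Lines look like "RuntimeVersion:  1.29.1".
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
                fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        fmt.Printf("runtime %s %s (API %s)\n",
            fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
    }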
	I0819 17:40:56.456721   45795 ssh_runner.go:195] Run: crio --version
	I0819 17:40:56.484584   45795 command_runner.go:130] > crio version 1.29.1
	I0819 17:40:56.484605   45795 command_runner.go:130] > Version:        1.29.1
	I0819 17:40:56.484612   45795 command_runner.go:130] > GitCommit:      unknown
	I0819 17:40:56.484619   45795 command_runner.go:130] > GitCommitDate:  unknown
	I0819 17:40:56.484625   45795 command_runner.go:130] > GitTreeState:   clean
	I0819 17:40:56.484632   45795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 17:40:56.484639   45795 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 17:40:56.484645   45795 command_runner.go:130] > Compiler:       gc
	I0819 17:40:56.484652   45795 command_runner.go:130] > Platform:       linux/amd64
	I0819 17:40:56.484658   45795 command_runner.go:130] > Linkmode:       dynamic
	I0819 17:40:56.484669   45795 command_runner.go:130] > BuildTags:      
	I0819 17:40:56.484676   45795 command_runner.go:130] >   containers_image_ostree_stub
	I0819 17:40:56.484680   45795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 17:40:56.484684   45795 command_runner.go:130] >   btrfs_noversion
	I0819 17:40:56.484689   45795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 17:40:56.484697   45795 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 17:40:56.484701   45795 command_runner.go:130] >   seccomp
	I0819 17:40:56.484705   45795 command_runner.go:130] > LDFlags:          unknown
	I0819 17:40:56.484710   45795 command_runner.go:130] > SeccompEnabled:   true
	I0819 17:40:56.484714   45795 command_runner.go:130] > AppArmorEnabled:  false
	I0819 17:40:56.484796   45795 ssh_runner.go:195] Run: crio --version
	I0819 17:40:56.510072   45795 command_runner.go:130] > crio version 1.29.1
	I0819 17:40:56.510092   45795 command_runner.go:130] > Version:        1.29.1
	I0819 17:40:56.510098   45795 command_runner.go:130] > GitCommit:      unknown
	I0819 17:40:56.510102   45795 command_runner.go:130] > GitCommitDate:  unknown
	I0819 17:40:56.510106   45795 command_runner.go:130] > GitTreeState:   clean
	I0819 17:40:56.510112   45795 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0819 17:40:56.510115   45795 command_runner.go:130] > GoVersion:      go1.21.6
	I0819 17:40:56.510119   45795 command_runner.go:130] > Compiler:       gc
	I0819 17:40:56.510124   45795 command_runner.go:130] > Platform:       linux/amd64
	I0819 17:40:56.510128   45795 command_runner.go:130] > Linkmode:       dynamic
	I0819 17:40:56.510145   45795 command_runner.go:130] > BuildTags:      
	I0819 17:40:56.510151   45795 command_runner.go:130] >   containers_image_ostree_stub
	I0819 17:40:56.510156   45795 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0819 17:40:56.510162   45795 command_runner.go:130] >   btrfs_noversion
	I0819 17:40:56.510167   45795 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0819 17:40:56.510171   45795 command_runner.go:130] >   libdm_no_deferred_remove
	I0819 17:40:56.510175   45795 command_runner.go:130] >   seccomp
	I0819 17:40:56.510179   45795 command_runner.go:130] > LDFlags:          unknown
	I0819 17:40:56.510182   45795 command_runner.go:130] > SeccompEnabled:   true
	I0819 17:40:56.510189   45795 command_runner.go:130] > AppArmorEnabled:  false
	I0819 17:40:56.513324   45795 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 17:40:56.514662   45795 main.go:141] libmachine: (multinode-188752) Calling .GetIP
	I0819 17:40:56.517552   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:56.517909   45795 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:40:56.517933   45795 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:40:56.518162   45795 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:40:56.521925   45795 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0819 17:40:56.522022   45795 kubeadm.go:883] updating cluster {Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
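The most useful part of that cluster-config dump is the Nodes list: one control-plane node plus two workers, with m03 listed with Port:0 in this dump. A tiny sketch of that shape, with field names copied from the printed struct (these are not necessarily minikube's exact types):

    package main

    import "fmt"

    // Node mirrors the fields printed in the cluster config dump above.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ContainerRuntime  string
        ControlPlane      bool
        Worker            bool
    }

    func main() {
        nodes := []Node{
            {Name: "", IP: "192.168.39.69", Port: 8443, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
            {Name: "m02", IP: "192.168.39.123", Port: 8443, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", ControlPlane: false, Worker: true},
            {Name: "m03", IP: "192.168.39.52", Port: 0, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", ControlPlane: false, Worker: true},
        }
        for _, n := range nodes {
            role := "worker"
            if n.ControlPlane {
                role = "control-plane"
            }
            fmt.Printf("%-4s %-15s port=%-5d %s\n", n.Name, n.IP, n.Port, role)
        }
    }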
	I0819 17:40:56.522170   45795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:40:56.522216   45795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:40:56.565579   45795 command_runner.go:130] > {
	I0819 17:40:56.565606   45795 command_runner.go:130] >   "images": [
	I0819 17:40:56.565618   45795 command_runner.go:130] >     {
	I0819 17:40:56.565627   45795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 17:40:56.565632   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.565644   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 17:40:56.565652   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565660   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.565676   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 17:40:56.565691   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 17:40:56.565696   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565702   45795 command_runner.go:130] >       "size": "87165492",
	I0819 17:40:56.565705   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.565710   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.565715   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.565720   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.565724   45795 command_runner.go:130] >     },
	I0819 17:40:56.565729   45795 command_runner.go:130] >     {
	I0819 17:40:56.565739   45795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 17:40:56.565752   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.565762   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 17:40:56.565771   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565779   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.565795   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 17:40:56.565806   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 17:40:56.565813   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565820   45795 command_runner.go:130] >       "size": "87190579",
	I0819 17:40:56.565831   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.565848   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.565858   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.565871   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.565880   45795 command_runner.go:130] >     },
	I0819 17:40:56.565890   45795 command_runner.go:130] >     {
	I0819 17:40:56.565901   45795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 17:40:56.565912   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.565925   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 17:40:56.565935   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565942   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.565968   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 17:40:56.565982   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 17:40:56.565988   45795 command_runner.go:130] >       ],
	I0819 17:40:56.565998   45795 command_runner.go:130] >       "size": "1363676",
	I0819 17:40:56.566010   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.566021   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566036   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566044   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566053   45795 command_runner.go:130] >     },
	I0819 17:40:56.566063   45795 command_runner.go:130] >     {
	I0819 17:40:56.566074   45795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 17:40:56.566085   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566098   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 17:40:56.566109   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566120   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566136   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 17:40:56.566159   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 17:40:56.566170   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566182   45795 command_runner.go:130] >       "size": "31470524",
	I0819 17:40:56.566192   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.566203   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566211   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566225   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566235   45795 command_runner.go:130] >     },
	I0819 17:40:56.566244   45795 command_runner.go:130] >     {
	I0819 17:40:56.566255   45795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 17:40:56.566266   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566279   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 17:40:56.566289   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566297   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566318   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 17:40:56.566330   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 17:40:56.566339   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566351   45795 command_runner.go:130] >       "size": "61245718",
	I0819 17:40:56.566366   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.566378   45795 command_runner.go:130] >       "username": "nonroot",
	I0819 17:40:56.566392   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566410   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566419   45795 command_runner.go:130] >     },
	I0819 17:40:56.566425   45795 command_runner.go:130] >     {
	I0819 17:40:56.566439   45795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 17:40:56.566450   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566458   45795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 17:40:56.566467   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566475   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566490   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 17:40:56.566502   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 17:40:56.566507   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566519   45795 command_runner.go:130] >       "size": "149009664",
	I0819 17:40:56.566529   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.566540   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.566550   45795 command_runner.go:130] >       },
	I0819 17:40:56.566560   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566573   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566585   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566591   45795 command_runner.go:130] >     },
	I0819 17:40:56.566601   45795 command_runner.go:130] >     {
	I0819 17:40:56.566612   45795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 17:40:56.566622   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566634   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 17:40:56.566644   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566655   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566668   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 17:40:56.566683   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 17:40:56.566693   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566700   45795 command_runner.go:130] >       "size": "95233506",
	I0819 17:40:56.566707   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.566714   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.566724   45795 command_runner.go:130] >       },
	I0819 17:40:56.566730   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566741   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566751   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566768   45795 command_runner.go:130] >     },
	I0819 17:40:56.566778   45795 command_runner.go:130] >     {
	I0819 17:40:56.566792   45795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 17:40:56.566802   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566812   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 17:40:56.566821   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566832   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566865   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 17:40:56.566879   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 17:40:56.566882   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566886   45795 command_runner.go:130] >       "size": "89437512",
	I0819 17:40:56.566891   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.566895   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.566901   45795 command_runner.go:130] >       },
	I0819 17:40:56.566906   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.566911   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.566917   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.566923   45795 command_runner.go:130] >     },
	I0819 17:40:56.566928   45795 command_runner.go:130] >     {
	I0819 17:40:56.566938   45795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 17:40:56.566945   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.566952   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 17:40:56.566959   45795 command_runner.go:130] >       ],
	I0819 17:40:56.566965   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.566977   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 17:40:56.566989   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 17:40:56.566995   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567002   45795 command_runner.go:130] >       "size": "92728217",
	I0819 17:40:56.567009   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.567020   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.567031   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.567042   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.567052   45795 command_runner.go:130] >     },
	I0819 17:40:56.567062   45795 command_runner.go:130] >     {
	I0819 17:40:56.567075   45795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 17:40:56.567083   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.567096   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 17:40:56.567103   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567107   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.567117   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 17:40:56.567127   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 17:40:56.567133   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567138   45795 command_runner.go:130] >       "size": "68420936",
	I0819 17:40:56.567144   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.567148   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.567154   45795 command_runner.go:130] >       },
	I0819 17:40:56.567158   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.567165   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.567169   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.567175   45795 command_runner.go:130] >     },
	I0819 17:40:56.567179   45795 command_runner.go:130] >     {
	I0819 17:40:56.567187   45795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 17:40:56.567194   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.567199   45795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 17:40:56.567205   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567209   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.567216   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 17:40:56.567225   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 17:40:56.567231   45795 command_runner.go:130] >       ],
	I0819 17:40:56.567236   45795 command_runner.go:130] >       "size": "742080",
	I0819 17:40:56.567242   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.567246   45795 command_runner.go:130] >         "value": "65535"
	I0819 17:40:56.567253   45795 command_runner.go:130] >       },
	I0819 17:40:56.567257   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.567263   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.567267   45795 command_runner.go:130] >       "pinned": true
	I0819 17:40:56.567273   45795 command_runner.go:130] >     }
	I0819 17:40:56.567277   45795 command_runner.go:130] >   ]
	I0819 17:40:56.567283   45795 command_runner.go:130] > }
	I0819 17:40:56.567522   45795 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:40:56.567536   45795 crio.go:433] Images already preloaded, skipping extraction
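The preload check above shells out to `sudo crictl images --output json` and compares the result against the image set expected for Kubernetes v1.31.0. A rough standalone sketch of the same idea, with the JSON field names taken from the output printed above (the expected-tag list is a hand-picked subset from this log, and this is not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // imageList mirrors the JSON fields visible in the crictl output above.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
            Pinned   bool     `json:"pinned"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            log.Fatalf("crictl images: %v", err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            log.Fatalf("decode: %v", err)
        }
        // A subset of the tags the v1.31.0 preload is expected to provide.
        want := map[string]bool{
            "registry.k8s.io/kube-apiserver:v1.31.0":          false,
            "registry.k8s.io/kube-controller-manager:v1.31.0": false,
            "registry.k8s.io/kube-scheduler:v1.31.0":          false,
            "registry.k8s.io/kube-proxy:v1.31.0":              false,
            "registry.k8s.io/etcd:3.5.15-0":                   false,
            "registry.k8s.io/coredns/coredns:v1.11.1":         false,
            "registry.k8s.io/pause:3.10":                      false,
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if _, ok := want[tag]; ok {
                    want[tag] = true
                }
            }
        }
        for tag, found := range want {
            fmt.Printf("%-48s present=%v\n", tag, found)
        }
    }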
	I0819 17:40:56.567614   45795 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:40:56.599944   45795 command_runner.go:130] > {
	I0819 17:40:56.599965   45795 command_runner.go:130] >   "images": [
	I0819 17:40:56.599969   45795 command_runner.go:130] >     {
	I0819 17:40:56.599980   45795 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0819 17:40:56.599986   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.599991   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0819 17:40:56.599995   45795 command_runner.go:130] >       ],
	I0819 17:40:56.599999   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600007   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0819 17:40:56.600014   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0819 17:40:56.600018   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600023   45795 command_runner.go:130] >       "size": "87165492",
	I0819 17:40:56.600027   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600034   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600040   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600045   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600048   45795 command_runner.go:130] >     },
	I0819 17:40:56.600052   45795 command_runner.go:130] >     {
	I0819 17:40:56.600058   45795 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0819 17:40:56.600062   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600068   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0819 17:40:56.600072   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600077   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600084   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0819 17:40:56.600093   45795 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0819 17:40:56.600097   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600100   45795 command_runner.go:130] >       "size": "87190579",
	I0819 17:40:56.600104   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600111   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600123   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600129   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600133   45795 command_runner.go:130] >     },
	I0819 17:40:56.600138   45795 command_runner.go:130] >     {
	I0819 17:40:56.600144   45795 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0819 17:40:56.600148   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600153   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0819 17:40:56.600157   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600161   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600170   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0819 17:40:56.600178   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0819 17:40:56.600183   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600188   45795 command_runner.go:130] >       "size": "1363676",
	I0819 17:40:56.600194   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600198   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600206   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600213   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600217   45795 command_runner.go:130] >     },
	I0819 17:40:56.600220   45795 command_runner.go:130] >     {
	I0819 17:40:56.600225   45795 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0819 17:40:56.600232   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600238   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0819 17:40:56.600243   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600249   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600259   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0819 17:40:56.600273   45795 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0819 17:40:56.600280   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600284   45795 command_runner.go:130] >       "size": "31470524",
	I0819 17:40:56.600290   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600294   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600300   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600304   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600309   45795 command_runner.go:130] >     },
	I0819 17:40:56.600315   45795 command_runner.go:130] >     {
	I0819 17:40:56.600326   45795 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0819 17:40:56.600334   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600344   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0819 17:40:56.600350   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600354   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600363   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0819 17:40:56.600373   45795 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0819 17:40:56.600376   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600380   45795 command_runner.go:130] >       "size": "61245718",
	I0819 17:40:56.600384   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600388   45795 command_runner.go:130] >       "username": "nonroot",
	I0819 17:40:56.600394   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600398   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600403   45795 command_runner.go:130] >     },
	I0819 17:40:56.600407   45795 command_runner.go:130] >     {
	I0819 17:40:56.600415   45795 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0819 17:40:56.600421   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600426   45795 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0819 17:40:56.600432   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600436   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600451   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0819 17:40:56.600460   45795 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0819 17:40:56.600466   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600470   45795 command_runner.go:130] >       "size": "149009664",
	I0819 17:40:56.600475   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600479   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600485   45795 command_runner.go:130] >       },
	I0819 17:40:56.600491   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600495   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600501   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600505   45795 command_runner.go:130] >     },
	I0819 17:40:56.600510   45795 command_runner.go:130] >     {
	I0819 17:40:56.600516   45795 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0819 17:40:56.600522   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600527   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0819 17:40:56.600533   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600537   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600546   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0819 17:40:56.600569   45795 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0819 17:40:56.600576   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600581   45795 command_runner.go:130] >       "size": "95233506",
	I0819 17:40:56.600586   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600589   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600592   45795 command_runner.go:130] >       },
	I0819 17:40:56.600596   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600599   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600603   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600609   45795 command_runner.go:130] >     },
	I0819 17:40:56.600613   45795 command_runner.go:130] >     {
	I0819 17:40:56.600621   45795 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0819 17:40:56.600625   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600630   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0819 17:40:56.600634   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600638   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600661   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0819 17:40:56.600671   45795 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0819 17:40:56.600677   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600681   45795 command_runner.go:130] >       "size": "89437512",
	I0819 17:40:56.600687   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600691   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600696   45795 command_runner.go:130] >       },
	I0819 17:40:56.600700   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600704   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600710   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600713   45795 command_runner.go:130] >     },
	I0819 17:40:56.600719   45795 command_runner.go:130] >     {
	I0819 17:40:56.600724   45795 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0819 17:40:56.600730   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600735   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0819 17:40:56.600740   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600744   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600767   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0819 17:40:56.600779   45795 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0819 17:40:56.600784   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600793   45795 command_runner.go:130] >       "size": "92728217",
	I0819 17:40:56.600799   45795 command_runner.go:130] >       "uid": null,
	I0819 17:40:56.600804   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600810   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600814   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600819   45795 command_runner.go:130] >     },
	I0819 17:40:56.600823   45795 command_runner.go:130] >     {
	I0819 17:40:56.600831   45795 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0819 17:40:56.600838   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600843   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0819 17:40:56.600848   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600852   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600861   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0819 17:40:56.600870   45795 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0819 17:40:56.600875   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600879   45795 command_runner.go:130] >       "size": "68420936",
	I0819 17:40:56.600885   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600889   45795 command_runner.go:130] >         "value": "0"
	I0819 17:40:56.600895   45795 command_runner.go:130] >       },
	I0819 17:40:56.600898   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600905   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.600909   45795 command_runner.go:130] >       "pinned": false
	I0819 17:40:56.600913   45795 command_runner.go:130] >     },
	I0819 17:40:56.600916   45795 command_runner.go:130] >     {
	I0819 17:40:56.600924   45795 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0819 17:40:56.600928   45795 command_runner.go:130] >       "repoTags": [
	I0819 17:40:56.600935   45795 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0819 17:40:56.600938   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600942   45795 command_runner.go:130] >       "repoDigests": [
	I0819 17:40:56.600950   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0819 17:40:56.600957   45795 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0819 17:40:56.600963   45795 command_runner.go:130] >       ],
	I0819 17:40:56.600966   45795 command_runner.go:130] >       "size": "742080",
	I0819 17:40:56.600970   45795 command_runner.go:130] >       "uid": {
	I0819 17:40:56.600974   45795 command_runner.go:130] >         "value": "65535"
	I0819 17:40:56.600977   45795 command_runner.go:130] >       },
	I0819 17:40:56.600990   45795 command_runner.go:130] >       "username": "",
	I0819 17:40:56.600997   45795 command_runner.go:130] >       "spec": null,
	I0819 17:40:56.601003   45795 command_runner.go:130] >       "pinned": true
	I0819 17:40:56.601009   45795 command_runner.go:130] >     }
	I0819 17:40:56.601017   45795 command_runner.go:130] >   ]
	I0819 17:40:56.601023   45795 command_runner.go:130] > }
	I0819 17:40:56.601151   45795 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:40:56.601163   45795 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:40:56.601170   45795 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.31.0 crio true true} ...
	I0819 17:40:56.601259   45795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-188752 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:40:56.601323   45795 ssh_runner.go:195] Run: crio config
	I0819 17:40:56.644531   45795 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0819 17:40:56.644573   45795 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0819 17:40:56.644592   45795 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0819 17:40:56.644597   45795 command_runner.go:130] > #
	I0819 17:40:56.644608   45795 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0819 17:40:56.644618   45795 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0819 17:40:56.644631   45795 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0819 17:40:56.644648   45795 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0819 17:40:56.644658   45795 command_runner.go:130] > # reload'.
	I0819 17:40:56.644668   45795 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0819 17:40:56.644678   45795 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0819 17:40:56.644691   45795 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0819 17:40:56.644703   45795 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0819 17:40:56.644713   45795 command_runner.go:130] > [crio]
	I0819 17:40:56.644723   45795 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0819 17:40:56.644733   45795 command_runner.go:130] > # containers images, in this directory.
	I0819 17:40:56.644745   45795 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0819 17:40:56.644778   45795 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0819 17:40:56.644789   45795 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0819 17:40:56.644801   45795 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0819 17:40:56.644957   45795 command_runner.go:130] > # imagestore = ""
	I0819 17:40:56.644975   45795 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0819 17:40:56.644982   45795 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0819 17:40:56.645063   45795 command_runner.go:130] > storage_driver = "overlay"
	I0819 17:40:56.645093   45795 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0819 17:40:56.645106   45795 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0819 17:40:56.645115   45795 command_runner.go:130] > storage_option = [
	I0819 17:40:56.645438   45795 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0819 17:40:56.645446   45795 command_runner.go:130] > ]
	I0819 17:40:56.645452   45795 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0819 17:40:56.645467   45795 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0819 17:40:56.645477   45795 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0819 17:40:56.645486   45795 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0819 17:40:56.645498   45795 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0819 17:40:56.645505   45795 command_runner.go:130] > # always happen on a node reboot
	I0819 17:40:56.645510   45795 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0819 17:40:56.645537   45795 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0819 17:40:56.645547   45795 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0819 17:40:56.645556   45795 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0819 17:40:56.645567   45795 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0819 17:40:56.645579   45795 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0819 17:40:56.645594   45795 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0819 17:40:56.645603   45795 command_runner.go:130] > # internal_wipe = true
	I0819 17:40:56.645618   45795 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0819 17:40:56.645636   45795 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0819 17:40:56.645643   45795 command_runner.go:130] > # internal_repair = false
	I0819 17:40:56.645648   45795 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0819 17:40:56.645656   45795 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0819 17:40:56.645661   45795 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0819 17:40:56.645667   45795 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0819 17:40:56.645672   45795 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0819 17:40:56.645679   45795 command_runner.go:130] > [crio.api]
	I0819 17:40:56.645685   45795 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0819 17:40:56.645695   45795 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0819 17:40:56.645703   45795 command_runner.go:130] > # IP address on which the stream server will listen.
	I0819 17:40:56.645713   45795 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0819 17:40:56.645723   45795 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0819 17:40:56.645734   45795 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0819 17:40:56.645743   45795 command_runner.go:130] > # stream_port = "0"
	I0819 17:40:56.645752   45795 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0819 17:40:56.645759   45795 command_runner.go:130] > # stream_enable_tls = false
	I0819 17:40:56.645765   45795 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0819 17:40:56.645771   45795 command_runner.go:130] > # stream_idle_timeout = ""
	I0819 17:40:56.645777   45795 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0819 17:40:56.645785   45795 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0819 17:40:56.645789   45795 command_runner.go:130] > # minutes.
	I0819 17:40:56.645795   45795 command_runner.go:130] > # stream_tls_cert = ""
	I0819 17:40:56.645808   45795 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0819 17:40:56.645822   45795 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0819 17:40:56.645828   45795 command_runner.go:130] > # stream_tls_key = ""
	I0819 17:40:56.645837   45795 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0819 17:40:56.645850   45795 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0819 17:40:56.645874   45795 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0819 17:40:56.645883   45795 command_runner.go:130] > # stream_tls_ca = ""
	I0819 17:40:56.645898   45795 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 17:40:56.645908   45795 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0819 17:40:56.645921   45795 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0819 17:40:56.645932   45795 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0819 17:40:56.645942   45795 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0819 17:40:56.645953   45795 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0819 17:40:56.645968   45795 command_runner.go:130] > [crio.runtime]
	I0819 17:40:56.645977   45795 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0819 17:40:56.645983   45795 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0819 17:40:56.645990   45795 command_runner.go:130] > # "nofile=1024:2048"
	I0819 17:40:56.645999   45795 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0819 17:40:56.646008   45795 command_runner.go:130] > # default_ulimits = [
	I0819 17:40:56.646014   45795 command_runner.go:130] > # ]
	I0819 17:40:56.646027   45795 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0819 17:40:56.646035   45795 command_runner.go:130] > # no_pivot = false
	I0819 17:40:56.646044   45795 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0819 17:40:56.646056   45795 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0819 17:40:56.646069   45795 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0819 17:40:56.646077   45795 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0819 17:40:56.646088   45795 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0819 17:40:56.646106   45795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 17:40:56.646116   45795 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0819 17:40:56.646126   45795 command_runner.go:130] > # Cgroup setting for conmon
	I0819 17:40:56.646137   45795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0819 17:40:56.646147   45795 command_runner.go:130] > conmon_cgroup = "pod"
	I0819 17:40:56.646157   45795 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0819 17:40:56.646164   45795 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0819 17:40:56.646172   45795 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0819 17:40:56.646181   45795 command_runner.go:130] > conmon_env = [
	I0819 17:40:56.646190   45795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 17:40:56.646201   45795 command_runner.go:130] > ]
	I0819 17:40:56.646209   45795 command_runner.go:130] > # Additional environment variables to set for all the
	I0819 17:40:56.646221   45795 command_runner.go:130] > # containers. These are overridden if set in the
	I0819 17:40:56.646232   45795 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0819 17:40:56.646240   45795 command_runner.go:130] > # default_env = [
	I0819 17:40:56.646246   45795 command_runner.go:130] > # ]
	I0819 17:40:56.646257   45795 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0819 17:40:56.646271   45795 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0819 17:40:56.646280   45795 command_runner.go:130] > # selinux = false
	I0819 17:40:56.646289   45795 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0819 17:40:56.646302   45795 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0819 17:40:56.646319   45795 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0819 17:40:56.646337   45795 command_runner.go:130] > # seccomp_profile = ""
	I0819 17:40:56.646349   45795 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0819 17:40:56.646361   45795 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0819 17:40:56.646373   45795 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0819 17:40:56.646384   45795 command_runner.go:130] > # which might increase security.
	I0819 17:40:56.646392   45795 command_runner.go:130] > # This option is currently deprecated,
	I0819 17:40:56.646403   45795 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0819 17:40:56.646413   45795 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0819 17:40:56.646423   45795 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0819 17:40:56.646439   45795 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0819 17:40:56.646455   45795 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0819 17:40:56.646510   45795 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0819 17:40:56.646537   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.646546   45795 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0819 17:40:56.646559   45795 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0819 17:40:56.646571   45795 command_runner.go:130] > # the cgroup blockio controller.
	I0819 17:40:56.646580   45795 command_runner.go:130] > # blockio_config_file = ""
	I0819 17:40:56.646591   45795 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0819 17:40:56.646602   45795 command_runner.go:130] > # blockio parameters.
	I0819 17:40:56.646615   45795 command_runner.go:130] > # blockio_reload = false
	I0819 17:40:56.646626   45795 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0819 17:40:56.646634   45795 command_runner.go:130] > # irqbalance daemon.
	I0819 17:40:56.646646   45795 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0819 17:40:56.646660   45795 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0819 17:40:56.646678   45795 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0819 17:40:56.646690   45795 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0819 17:40:56.646711   45795 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0819 17:40:56.646724   45795 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0819 17:40:56.646736   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.646748   45795 command_runner.go:130] > # rdt_config_file = ""
	I0819 17:40:56.646761   45795 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0819 17:40:56.646773   45795 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0819 17:40:56.646819   45795 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0819 17:40:56.646831   45795 command_runner.go:130] > # separate_pull_cgroup = ""
	I0819 17:40:56.646846   45795 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0819 17:40:56.646857   45795 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0819 17:40:56.646875   45795 command_runner.go:130] > # will be added.
	I0819 17:40:56.646887   45795 command_runner.go:130] > # default_capabilities = [
	I0819 17:40:56.646897   45795 command_runner.go:130] > # 	"CHOWN",
	I0819 17:40:56.646905   45795 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0819 17:40:56.646915   45795 command_runner.go:130] > # 	"FSETID",
	I0819 17:40:56.646922   45795 command_runner.go:130] > # 	"FOWNER",
	I0819 17:40:56.646932   45795 command_runner.go:130] > # 	"SETGID",
	I0819 17:40:56.646944   45795 command_runner.go:130] > # 	"SETUID",
	I0819 17:40:56.646955   45795 command_runner.go:130] > # 	"SETPCAP",
	I0819 17:40:56.646963   45795 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0819 17:40:56.646973   45795 command_runner.go:130] > # 	"KILL",
	I0819 17:40:56.646980   45795 command_runner.go:130] > # ]
	I0819 17:40:56.646993   45795 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0819 17:40:56.647006   45795 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0819 17:40:56.647017   45795 command_runner.go:130] > # add_inheritable_capabilities = false
	I0819 17:40:56.647032   45795 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0819 17:40:56.647046   45795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 17:40:56.647057   45795 command_runner.go:130] > default_sysctls = [
	I0819 17:40:56.647067   45795 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0819 17:40:56.647073   45795 command_runner.go:130] > ]
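The default_sysctls entry above drops net.ipv4.ip_unprivileged_port_start to 0, so non-root containers started by this CRI-O instance can bind ports below 1024. A minimal sketch of checking that the sysctl really lands inside a container on this cluster (the pod name and busybox image are illustrative, not part of this run):

  kubectl --context multinode-188752 run sysctl-check --image=busybox --restart=Never --command \
    -- sh -c 'cat /proc/sys/net/ipv4/ip_unprivileged_port_start'
  kubectl --context multinode-188752 logs sysctl-check   # expected output: 0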
	I0819 17:40:56.647084   45795 command_runner.go:130] > # List of devices on the host that a
	I0819 17:40:56.647098   45795 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0819 17:40:56.647109   45795 command_runner.go:130] > # allowed_devices = [
	I0819 17:40:56.647115   45795 command_runner.go:130] > # 	"/dev/fuse",
	I0819 17:40:56.647122   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647130   45795 command_runner.go:130] > # List of additional devices, specified as
	I0819 17:40:56.647146   45795 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0819 17:40:56.647159   45795 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0819 17:40:56.647173   45795 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0819 17:40:56.647185   45795 command_runner.go:130] > # additional_devices = [
	I0819 17:40:56.647194   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647204   45795 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0819 17:40:56.647219   45795 command_runner.go:130] > # cdi_spec_dirs = [
	I0819 17:40:56.647230   45795 command_runner.go:130] > # 	"/etc/cdi",
	I0819 17:40:56.647238   45795 command_runner.go:130] > # 	"/var/run/cdi",
	I0819 17:40:56.647248   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647271   45795 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0819 17:40:56.647285   45795 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0819 17:40:56.647295   45795 command_runner.go:130] > # Defaults to false.
	I0819 17:40:56.647328   45795 command_runner.go:130] > # device_ownership_from_security_context = false
	I0819 17:40:56.647342   45795 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0819 17:40:56.647353   45795 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0819 17:40:56.647363   45795 command_runner.go:130] > # hooks_dir = [
	I0819 17:40:56.647372   45795 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0819 17:40:56.647378   45795 command_runner.go:130] > # ]
	I0819 17:40:56.647392   45795 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0819 17:40:56.647406   45795 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0819 17:40:56.647419   45795 command_runner.go:130] > # its default mounts from the following two files:
	I0819 17:40:56.647428   45795 command_runner.go:130] > #
	I0819 17:40:56.647439   45795 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0819 17:40:56.647452   45795 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0819 17:40:56.647466   45795 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0819 17:40:56.647475   45795 command_runner.go:130] > #
	I0819 17:40:56.647493   45795 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0819 17:40:56.647507   45795 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0819 17:40:56.647519   45795 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0819 17:40:56.647531   45795 command_runner.go:130] > #      only add mounts it finds in this file.
	I0819 17:40:56.647540   45795 command_runner.go:130] > #
	I0819 17:40:56.647549   45795 command_runner.go:130] > # default_mounts_file = ""
	I0819 17:40:56.647562   45795 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0819 17:40:56.647572   45795 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0819 17:40:56.647583   45795 command_runner.go:130] > pids_limit = 1024
	I0819 17:40:56.647593   45795 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0819 17:40:56.647607   45795 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0819 17:40:56.647621   45795 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0819 17:40:56.647637   45795 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0819 17:40:56.647648   45795 command_runner.go:130] > # log_size_max = -1
	I0819 17:40:56.647662   45795 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0819 17:40:56.647670   45795 command_runner.go:130] > # log_to_journald = false
	I0819 17:40:56.647691   45795 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0819 17:40:56.647703   45795 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0819 17:40:56.647719   45795 command_runner.go:130] > # Path to directory for container attach sockets.
	I0819 17:40:56.647739   45795 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0819 17:40:56.647758   45795 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0819 17:40:56.647767   45795 command_runner.go:130] > # bind_mount_prefix = ""
	I0819 17:40:56.647778   45795 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0819 17:40:56.647789   45795 command_runner.go:130] > # read_only = false
	I0819 17:40:56.647799   45795 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0819 17:40:56.647813   45795 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0819 17:40:56.647832   45795 command_runner.go:130] > # live configuration reload.
	I0819 17:40:56.647843   45795 command_runner.go:130] > # log_level = "info"
	I0819 17:40:56.647866   45795 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0819 17:40:56.647875   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.647885   45795 command_runner.go:130] > # log_filter = ""
	I0819 17:40:56.647896   45795 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0819 17:40:56.647909   45795 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0819 17:40:56.647919   45795 command_runner.go:130] > # separated by comma.
	I0819 17:40:56.647931   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.647942   45795 command_runner.go:130] > # uid_mappings = ""
	I0819 17:40:56.647951   45795 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0819 17:40:56.647962   45795 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0819 17:40:56.647976   45795 command_runner.go:130] > # separated by comma.
	I0819 17:40:56.647992   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.648002   45795 command_runner.go:130] > # gid_mappings = ""
	I0819 17:40:56.648013   45795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0819 17:40:56.648026   45795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 17:40:56.648038   45795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 17:40:56.648055   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.648066   45795 command_runner.go:130] > # minimum_mappable_uid = -1
	I0819 17:40:56.648076   45795 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0819 17:40:56.648089   45795 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0819 17:40:56.648103   45795 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0819 17:40:56.648119   45795 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0819 17:40:56.648130   45795 command_runner.go:130] > # minimum_mappable_gid = -1
	I0819 17:40:56.648141   45795 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0819 17:40:56.648154   45795 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0819 17:40:56.648164   45795 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0819 17:40:56.648173   45795 command_runner.go:130] > # ctr_stop_timeout = 30
	I0819 17:40:56.648187   45795 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0819 17:40:56.648196   45795 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0819 17:40:56.648203   45795 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0819 17:40:56.648209   45795 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0819 17:40:56.648220   45795 command_runner.go:130] > drop_infra_ctr = false
	I0819 17:40:56.648233   45795 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0819 17:40:56.648246   45795 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0819 17:40:56.648261   45795 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0819 17:40:56.648273   45795 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0819 17:40:56.648285   45795 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0819 17:40:56.648297   45795 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0819 17:40:56.648311   45795 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0819 17:40:56.648316   45795 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0819 17:40:56.648323   45795 command_runner.go:130] > # shared_cpuset = ""
	I0819 17:40:56.648328   45795 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0819 17:40:56.648333   45795 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0819 17:40:56.648340   45795 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0819 17:40:56.648347   45795 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0819 17:40:56.648354   45795 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0819 17:40:56.648360   45795 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0819 17:40:56.648368   45795 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0819 17:40:56.648372   45795 command_runner.go:130] > # enable_criu_support = false
	I0819 17:40:56.648377   45795 command_runner.go:130] > # Enable/disable the generation of the container and
	I0819 17:40:56.648385   45795 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0819 17:40:56.648392   45795 command_runner.go:130] > # enable_pod_events = false
	I0819 17:40:56.648398   45795 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0819 17:40:56.648412   45795 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0819 17:40:56.648419   45795 command_runner.go:130] > # default_runtime = "runc"
	I0819 17:40:56.648424   45795 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0819 17:40:56.648431   45795 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0819 17:40:56.648442   45795 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0819 17:40:56.648450   45795 command_runner.go:130] > # creation as a file is not desired either.
	I0819 17:40:56.648457   45795 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0819 17:40:56.648468   45795 command_runner.go:130] > # the hostname is being managed dynamically.
	I0819 17:40:56.648475   45795 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0819 17:40:56.648486   45795 command_runner.go:130] > # ]
	I0819 17:40:56.648495   45795 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0819 17:40:56.648504   45795 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0819 17:40:56.648512   45795 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0819 17:40:56.648518   45795 command_runner.go:130] > # Each entry in the table should follow the format:
	I0819 17:40:56.648523   45795 command_runner.go:130] > #
	I0819 17:40:56.648528   45795 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0819 17:40:56.648535   45795 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0819 17:40:56.648593   45795 command_runner.go:130] > # runtime_type = "oci"
	I0819 17:40:56.648601   45795 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0819 17:40:56.648606   45795 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0819 17:40:56.648611   45795 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0819 17:40:56.648618   45795 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0819 17:40:56.648622   45795 command_runner.go:130] > # monitor_env = []
	I0819 17:40:56.648629   45795 command_runner.go:130] > # privileged_without_host_devices = false
	I0819 17:40:56.648634   45795 command_runner.go:130] > # allowed_annotations = []
	I0819 17:40:56.648641   45795 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0819 17:40:56.648648   45795 command_runner.go:130] > # Where:
	I0819 17:40:56.648655   45795 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0819 17:40:56.648663   45795 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0819 17:40:56.648672   45795 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0819 17:40:56.648680   45795 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0819 17:40:56.648686   45795 command_runner.go:130] > #   in $PATH.
	I0819 17:40:56.648692   45795 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0819 17:40:56.648697   45795 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0819 17:40:56.648703   45795 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0819 17:40:56.648709   45795 command_runner.go:130] > #   state.
	I0819 17:40:56.648715   45795 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0819 17:40:56.648723   45795 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0819 17:40:56.648732   45795 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0819 17:40:56.648740   45795 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0819 17:40:56.648784   45795 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0819 17:40:56.648796   45795 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0819 17:40:56.648801   45795 command_runner.go:130] > #   The currently recognized values are:
	I0819 17:40:56.648809   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0819 17:40:56.648818   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0819 17:40:56.648834   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0819 17:40:56.648844   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0819 17:40:56.648854   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0819 17:40:56.648863   45795 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0819 17:40:56.648872   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0819 17:40:56.648880   45795 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0819 17:40:56.648885   45795 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0819 17:40:56.648894   45795 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0819 17:40:56.648901   45795 command_runner.go:130] > #   deprecated option "conmon".
	I0819 17:40:56.648908   45795 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0819 17:40:56.648915   45795 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0819 17:40:56.648922   45795 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0819 17:40:56.648929   45795 command_runner.go:130] > #   should be moved to the container's cgroup
	I0819 17:40:56.648935   45795 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0819 17:40:56.648943   45795 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0819 17:40:56.648949   45795 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0819 17:40:56.648956   45795 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0819 17:40:56.648959   45795 command_runner.go:130] > #
	I0819 17:40:56.648964   45795 command_runner.go:130] > # Using the seccomp notifier feature:
	I0819 17:40:56.648970   45795 command_runner.go:130] > #
	I0819 17:40:56.648976   45795 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0819 17:40:56.648985   45795 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0819 17:40:56.648991   45795 command_runner.go:130] > #
	I0819 17:40:56.648997   45795 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0819 17:40:56.649006   45795 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0819 17:40:56.649009   45795 command_runner.go:130] > #
	I0819 17:40:56.649015   45795 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0819 17:40:56.649022   45795 command_runner.go:130] > # feature.
	I0819 17:40:56.649028   45795 command_runner.go:130] > #
	I0819 17:40:56.649037   45795 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0819 17:40:56.649045   45795 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0819 17:40:56.649054   45795 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0819 17:40:56.649062   45795 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0819 17:40:56.649070   45795 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0819 17:40:56.649074   45795 command_runner.go:130] > #
	I0819 17:40:56.649079   45795 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0819 17:40:56.649094   45795 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0819 17:40:56.649101   45795 command_runner.go:130] > #
	I0819 17:40:56.649109   45795 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0819 17:40:56.649118   45795 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0819 17:40:56.649121   45795 command_runner.go:130] > #
	I0819 17:40:56.649127   45795 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0819 17:40:56.649135   45795 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0819 17:40:56.649142   45795 command_runner.go:130] > # limitation.
	I0819 17:40:56.649146   45795 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0819 17:40:56.649153   45795 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0819 17:40:56.649157   45795 command_runner.go:130] > runtime_type = "oci"
	I0819 17:40:56.649163   45795 command_runner.go:130] > runtime_root = "/run/runc"
	I0819 17:40:56.649168   45795 command_runner.go:130] > runtime_config_path = ""
	I0819 17:40:56.649175   45795 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0819 17:40:56.649179   45795 command_runner.go:130] > monitor_cgroup = "pod"
	I0819 17:40:56.649185   45795 command_runner.go:130] > monitor_exec_cgroup = ""
	I0819 17:40:56.649189   45795 command_runner.go:130] > monitor_env = [
	I0819 17:40:56.649197   45795 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0819 17:40:56.649204   45795 command_runner.go:130] > ]
	I0819 17:40:56.649209   45795 command_runner.go:130] > privileged_without_host_devices = false
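The [crio.runtime.runtimes.runc] handler above points at concrete binaries (runtime_path, monitor_path), and the earlier pinns_path setting does the same. A quick sanity check, sketched against this profile, that those paths actually exist in the guest:

  out/minikube-linux-amd64 -p multinode-188752 ssh "ls -l /usr/bin/runc /usr/libexec/crio/conmon /usr/bin/pinns"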
	I0819 17:40:56.649219   45795 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0819 17:40:56.649227   45795 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0819 17:40:56.649233   45795 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0819 17:40:56.649243   45795 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0819 17:40:56.649252   45795 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0819 17:40:56.649260   45795 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0819 17:40:56.649273   45795 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0819 17:40:56.649283   45795 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0819 17:40:56.649288   45795 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0819 17:40:56.649297   45795 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0819 17:40:56.649305   45795 command_runner.go:130] > # Example:
	I0819 17:40:56.649309   45795 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0819 17:40:56.649313   45795 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0819 17:40:56.649317   45795 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0819 17:40:56.649322   45795 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0819 17:40:56.649325   45795 command_runner.go:130] > # cpuset = 0
	I0819 17:40:56.649334   45795 command_runner.go:130] > # cpushares = "0-1"
	I0819 17:40:56.649338   45795 command_runner.go:130] > # Where:
	I0819 17:40:56.649344   45795 command_runner.go:130] > # The workload name is workload-type.
	I0819 17:40:56.649351   45795 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0819 17:40:56.649356   45795 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0819 17:40:56.649361   45795 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0819 17:40:56.649368   45795 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0819 17:40:56.649374   45795 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0819 17:40:56.649378   45795 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0819 17:40:56.649384   45795 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0819 17:40:56.649388   45795 command_runner.go:130] > # Default value is set to true
	I0819 17:40:56.649392   45795 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0819 17:40:56.649397   45795 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0819 17:40:56.649401   45795 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0819 17:40:56.649405   45795 command_runner.go:130] > # Default value is set to 'false'
	I0819 17:40:56.649409   45795 command_runner.go:130] > # disable_hostport_mapping = false
	I0819 17:40:56.649414   45795 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0819 17:40:56.649417   45795 command_runner.go:130] > #
	I0819 17:40:56.649423   45795 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0819 17:40:56.649428   45795 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0819 17:40:56.649434   45795 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0819 17:40:56.649440   45795 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0819 17:40:56.649445   45795 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0819 17:40:56.649448   45795 command_runner.go:130] > [crio.image]
	I0819 17:40:56.649454   45795 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0819 17:40:56.649458   45795 command_runner.go:130] > # default_transport = "docker://"
	I0819 17:40:56.649463   45795 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0819 17:40:56.649468   45795 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0819 17:40:56.649472   45795 command_runner.go:130] > # global_auth_file = ""
	I0819 17:40:56.649476   45795 command_runner.go:130] > # The image used to instantiate infra containers.
	I0819 17:40:56.649481   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.649487   45795 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0819 17:40:56.649493   45795 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0819 17:40:56.649501   45795 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0819 17:40:56.649506   45795 command_runner.go:130] > # This option supports live configuration reload.
	I0819 17:40:56.649513   45795 command_runner.go:130] > # pause_image_auth_file = ""
	I0819 17:40:56.649523   45795 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0819 17:40:56.649532   45795 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0819 17:40:56.649545   45795 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0819 17:40:56.649553   45795 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0819 17:40:56.649563   45795 command_runner.go:130] > # pause_command = "/pause"
	I0819 17:40:56.649569   45795 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0819 17:40:56.649578   45795 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0819 17:40:56.649586   45795 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0819 17:40:56.649595   45795 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0819 17:40:56.649600   45795 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0819 17:40:56.649609   45795 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0819 17:40:56.649615   45795 command_runner.go:130] > # pinned_images = [
	I0819 17:40:56.649622   45795 command_runner.go:130] > # ]
	I0819 17:40:56.649630   45795 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0819 17:40:56.649637   45795 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0819 17:40:56.649647   45795 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0819 17:40:56.649656   45795 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0819 17:40:56.649663   45795 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0819 17:40:56.649670   45795 command_runner.go:130] > # signature_policy = ""
	I0819 17:40:56.649675   45795 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0819 17:40:56.649684   45795 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0819 17:40:56.649691   45795 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0819 17:40:56.649699   45795 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0819 17:40:56.649707   45795 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0819 17:40:56.649714   45795 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0819 17:40:56.649723   45795 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0819 17:40:56.649731   45795 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0819 17:40:56.649738   45795 command_runner.go:130] > # changing them here.
	I0819 17:40:56.649742   45795 command_runner.go:130] > # insecure_registries = [
	I0819 17:40:56.649748   45795 command_runner.go:130] > # ]
	I0819 17:40:56.649755   45795 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0819 17:40:56.649762   45795 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0819 17:40:56.649769   45795 command_runner.go:130] > # image_volumes = "mkdir"
	I0819 17:40:56.649777   45795 command_runner.go:130] > # Temporary directory to use for storing big files
	I0819 17:40:56.649784   45795 command_runner.go:130] > # big_files_temporary_dir = ""
	I0819 17:40:56.649789   45795 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0819 17:40:56.649800   45795 command_runner.go:130] > # CNI plugins.
	I0819 17:40:56.649807   45795 command_runner.go:130] > [crio.network]
	I0819 17:40:56.649813   45795 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0819 17:40:56.649823   45795 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0819 17:40:56.649830   45795 command_runner.go:130] > # cni_default_network = ""
	I0819 17:40:56.649836   45795 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0819 17:40:56.649843   45795 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0819 17:40:56.649863   45795 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0819 17:40:56.649874   45795 command_runner.go:130] > # plugin_dirs = [
	I0819 17:40:56.649890   45795 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0819 17:40:56.649900   45795 command_runner.go:130] > # ]
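The [crio.network] section above is left at its defaults, so CRI-O picks up the first CNI config found under network_dir. A sketch of listing what is actually installed on the node (the directories are the defaults shown above):

  out/minikube-linux-amd64 -p multinode-188752 ssh "ls /etc/cni/net.d/ && ls /opt/cni/bin/"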
	I0819 17:40:56.649910   45795 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0819 17:40:56.649918   45795 command_runner.go:130] > [crio.metrics]
	I0819 17:40:56.649923   45795 command_runner.go:130] > # Globally enable or disable metrics support.
	I0819 17:40:56.649930   45795 command_runner.go:130] > enable_metrics = true
	I0819 17:40:56.649934   45795 command_runner.go:130] > # Specify enabled metrics collectors.
	I0819 17:40:56.649945   45795 command_runner.go:130] > # Per default all metrics are enabled.
	I0819 17:40:56.649955   45795 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0819 17:40:56.649969   45795 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0819 17:40:56.649982   45795 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0819 17:40:56.649990   45795 command_runner.go:130] > # metrics_collectors = [
	I0819 17:40:56.649993   45795 command_runner.go:130] > # 	"operations",
	I0819 17:40:56.649998   45795 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0819 17:40:56.650005   45795 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0819 17:40:56.650009   45795 command_runner.go:130] > # 	"operations_errors",
	I0819 17:40:56.650014   45795 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0819 17:40:56.650018   45795 command_runner.go:130] > # 	"image_pulls_by_name",
	I0819 17:40:56.650025   45795 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0819 17:40:56.650035   45795 command_runner.go:130] > # 	"image_pulls_failures",
	I0819 17:40:56.650043   45795 command_runner.go:130] > # 	"image_pulls_successes",
	I0819 17:40:56.650054   45795 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0819 17:40:56.650065   45795 command_runner.go:130] > # 	"image_layer_reuse",
	I0819 17:40:56.650075   45795 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0819 17:40:56.650084   45795 command_runner.go:130] > # 	"containers_oom_total",
	I0819 17:40:56.650091   45795 command_runner.go:130] > # 	"containers_oom",
	I0819 17:40:56.650101   45795 command_runner.go:130] > # 	"processes_defunct",
	I0819 17:40:56.650116   45795 command_runner.go:130] > # 	"operations_total",
	I0819 17:40:56.650129   45795 command_runner.go:130] > # 	"operations_latency_seconds",
	I0819 17:40:56.650141   45795 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0819 17:40:56.650149   45795 command_runner.go:130] > # 	"operations_errors_total",
	I0819 17:40:56.650162   45795 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0819 17:40:56.650169   45795 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0819 17:40:56.650174   45795 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0819 17:40:56.650178   45795 command_runner.go:130] > # 	"image_pulls_success_total",
	I0819 17:40:56.650182   45795 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0819 17:40:56.650186   45795 command_runner.go:130] > # 	"containers_oom_count_total",
	I0819 17:40:56.650191   45795 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0819 17:40:56.650195   45795 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0819 17:40:56.650201   45795 command_runner.go:130] > # ]
	I0819 17:40:56.650207   45795 command_runner.go:130] > # The port on which the metrics server will listen.
	I0819 17:40:56.650211   45795 command_runner.go:130] > # metrics_port = 9090
	I0819 17:40:56.650216   45795 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0819 17:40:56.650225   45795 command_runner.go:130] > # metrics_socket = ""
	I0819 17:40:56.650232   45795 command_runner.go:130] > # The certificate for the secure metrics server.
	I0819 17:40:56.650238   45795 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0819 17:40:56.650247   45795 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0819 17:40:56.650251   45795 command_runner.go:130] > # certificate on any modification event.
	I0819 17:40:56.650259   45795 command_runner.go:130] > # metrics_cert = ""
	I0819 17:40:56.650264   45795 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0819 17:40:56.650271   45795 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0819 17:40:56.650276   45795 command_runner.go:130] > # metrics_key = ""
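With enable_metrics = true and the commented-out default metrics_port of 9090, CRI-O's Prometheus endpoint should be scrapeable from inside the guest. A hedged sketch, assuming the default local binding:

  out/minikube-linux-amd64 -p multinode-188752 ssh "curl -s http://127.0.0.1:9090/metrics | grep -c '^crio_'"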
	I0819 17:40:56.650281   45795 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0819 17:40:56.650287   45795 command_runner.go:130] > [crio.tracing]
	I0819 17:40:56.650295   45795 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0819 17:40:56.650305   45795 command_runner.go:130] > # enable_tracing = false
	I0819 17:40:56.650310   45795 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0819 17:40:56.650317   45795 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0819 17:40:56.650323   45795 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0819 17:40:56.650330   45795 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0819 17:40:56.650335   45795 command_runner.go:130] > # CRI-O NRI configuration.
	I0819 17:40:56.650341   45795 command_runner.go:130] > [crio.nri]
	I0819 17:40:56.650345   45795 command_runner.go:130] > # Globally enable or disable NRI.
	I0819 17:40:56.650359   45795 command_runner.go:130] > # enable_nri = false
	I0819 17:40:56.650366   45795 command_runner.go:130] > # NRI socket to listen on.
	I0819 17:40:56.650371   45795 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0819 17:40:56.650375   45795 command_runner.go:130] > # NRI plugin directory to use.
	I0819 17:40:56.650380   45795 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0819 17:40:56.650388   45795 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0819 17:40:56.650393   45795 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0819 17:40:56.650401   45795 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0819 17:40:56.650405   45795 command_runner.go:130] > # nri_disable_connections = false
	I0819 17:40:56.650412   45795 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0819 17:40:56.650417   45795 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0819 17:40:56.650425   45795 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0819 17:40:56.650429   45795 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0819 17:40:56.650437   45795 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0819 17:40:56.650441   45795 command_runner.go:130] > [crio.stats]
	I0819 17:40:56.650449   45795 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0819 17:40:56.650454   45795 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0819 17:40:56.650461   45795 command_runner.go:130] > # stats_collection_period = 0
	I0819 17:40:56.650495   45795 command_runner.go:130] ! time="2024-08-19 17:40:56.603425312Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0819 17:40:56.650514   45795 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
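The two stderr lines above ("Starting CRI-O, version: 1.29.1 ...") suggest this dump was produced by invoking the crio binary itself. A similar configuration print-out can be reproduced manually on the node; a sketch, assuming crio is on the guest's PATH:

  out/minikube-linux-amd64 -p multinode-188752 ssh "sudo crio config 2>/dev/null | head -n 40"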
	I0819 17:40:56.650655   45795 cni.go:84] Creating CNI manager for ""
	I0819 17:40:56.650670   45795 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 17:40:56.650682   45795 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:40:56.650707   45795 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-188752 NodeName:multinode-188752 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:40:56.650836   45795 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-188752"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:40:56.650900   45795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:40:56.660985   45795 command_runner.go:130] > kubeadm
	I0819 17:40:56.661002   45795 command_runner.go:130] > kubectl
	I0819 17:40:56.661007   45795 command_runner.go:130] > kubelet
	I0819 17:40:56.661022   45795 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:40:56.661076   45795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:40:56.669985   45795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0819 17:40:56.687212   45795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:40:56.703554   45795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
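The kubeadm manifest shown above has just been written to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of validating it with the same kubeadm binary minikube found on the node ('kubeadm config validate' is available in recent releases such as v1.31):

  out/minikube-linux-amd64 -p multinode-188752 ssh "sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"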
	I0819 17:40:56.720273   45795 ssh_runner.go:195] Run: grep 192.168.39.69	control-plane.minikube.internal$ /etc/hosts
	I0819 17:40:56.723648   45795 command_runner.go:130] > 192.168.39.69	control-plane.minikube.internal
	I0819 17:40:56.723757   45795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:40:56.865597   45795 ssh_runner.go:195] Run: sudo systemctl start kubelet
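After the 10-kubeadm.conf drop-in and kubelet.service unit are copied and systemd is reloaded and started, the kubelet should report as active. A quick check sketched against this profile:

  out/minikube-linux-amd64 -p multinode-188752 ssh "sudo systemctl is-active kubelet"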
	I0819 17:40:56.879937   45795 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752 for IP: 192.168.39.69
	I0819 17:40:56.879963   45795 certs.go:194] generating shared ca certs ...
	I0819 17:40:56.879977   45795 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:40:56.880117   45795 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:40:56.880155   45795 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:40:56.880164   45795 certs.go:256] generating profile certs ...
	I0819 17:40:56.880232   45795 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/client.key
	I0819 17:40:56.880290   45795 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.key.a6c14ce1
	I0819 17:40:56.880325   45795 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.key
	I0819 17:40:56.880338   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 17:40:56.880353   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 17:40:56.880366   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 17:40:56.880377   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 17:40:56.880389   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 17:40:56.880401   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 17:40:56.880414   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 17:40:56.880425   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 17:40:56.880485   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:40:56.880515   45795 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:40:56.880523   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:40:56.880547   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:40:56.880570   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:40:56.880600   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:40:56.880636   45795 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:40:56.880661   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:56.880673   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem -> /usr/share/ca-certificates/17837.pem
	I0819 17:40:56.880686   45795 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> /usr/share/ca-certificates/178372.pem
	I0819 17:40:56.881249   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:40:56.904165   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:40:56.926584   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:40:56.949480   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:40:56.971252   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 17:40:56.993205   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0819 17:40:57.014749   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:40:57.035945   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/multinode-188752/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:40:57.057937   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:40:57.080724   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:40:57.102394   45795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:40:57.123552   45795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
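	The transfers above stage the cluster PKI and the generated kubeconfig into the paths the control plane expects inside the VM. A rough manual equivalent is sketched below; $NODE is a hypothetical placeholder for the node address and SSH credentials that minikube's ssh_runner actually uses.

	    # Hypothetical manual equivalent of the cert sync above; $NODE stands in for the
	    # control-plane address plus whatever SSH credentials minikube's ssh_runner holds.
	    MK=/home/jenkins/minikube-integration/19478-10654/.minikube
	    scp "$MK/ca.crt"                                  "$NODE:/var/lib/minikube/certs/ca.crt"
	    scp "$MK/profiles/multinode-188752/apiserver.crt" "$NODE:/var/lib/minikube/certs/apiserver.crt"
	    scp "$MK/profiles/multinode-188752/apiserver.key" "$NODE:/var/lib/minikube/certs/apiserver.key"
	    scp "$MK/ca.crt"                                  "$NODE:/usr/share/ca-certificates/minikubeCA.pem"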
	I0819 17:40:57.138379   45795 ssh_runner.go:195] Run: openssl version
	I0819 17:40:57.143794   45795 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0819 17:40:57.143864   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:40:57.153933   45795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.157847   45795 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.157882   45795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.157922   45795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:40:57.162924   45795 command_runner.go:130] > b5213941
	I0819 17:40:57.162976   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:40:57.171396   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:40:57.182745   45795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.186895   45795 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.186979   45795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.187029   45795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:40:57.192457   45795 command_runner.go:130] > 51391683
	I0819 17:40:57.192683   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 17:40:57.202681   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:40:57.214422   45795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.218425   45795 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.218601   45795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.218638   45795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:40:57.223709   45795 command_runner.go:130] > 3ec20f2e
	I0819 17:40:57.224003   45795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
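	The symlinks created above follow OpenSSL's subject-hash convention: a CA is published under /etc/ssl/certs/<subject-hash>.0 so that TLS libraries can locate it by hash. A minimal sketch of the same two steps for the minikube CA, using the filenames from the log:

	    # Expose the CA in /usr/share/ca-certificates under /etc/ssl/certs, then add the
	    # <hash>.0 symlink that OpenSSL lookups expect (same commands as run above).
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"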
	I0819 17:40:57.234441   45795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:40:57.238657   45795 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:40:57.238680   45795 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0819 17:40:57.238689   45795 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0819 17:40:57.238699   45795 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0819 17:40:57.238710   45795 command_runner.go:130] > Access: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238715   45795 command_runner.go:130] > Modify: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238722   45795 command_runner.go:130] > Change: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238727   45795 command_runner.go:130] >  Birth: 2024-08-19 17:34:06.999962379 +0000
	I0819 17:40:57.238770   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 17:40:57.243974   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.244143   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 17:40:57.249692   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.249775   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 17:40:57.255002   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.255246   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 17:40:57.260321   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.260542   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 17:40:57.269823   45795 command_runner.go:130] > Certificate will not expire
	I0819 17:40:57.269884   45795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 17:40:57.282056   45795 command_runner.go:130] > Certificate will not expire
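	Each of the checks above relies on openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours) and prints the "Certificate will not expire" lines seen here; minikube presumably uses the exit code to decide whether certificates need regenerating. A standalone version of one check:

	    # Exit status signals whether the cert expires within the next 24 hours.
	    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "apiserver.crt expires within 24h" >&2
	    fi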
	I0819 17:40:57.282410   45795 kubeadm.go:392] StartCluster: {Name:multinode-188752 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-188752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:40:57.282521   45795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:40:57.282581   45795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:40:57.370072   45795 command_runner.go:130] > 81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d
	I0819 17:40:57.370122   45795 command_runner.go:130] > 1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205
	I0819 17:40:57.370130   45795 command_runner.go:130] > 2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1
	I0819 17:40:57.370137   45795 command_runner.go:130] > 176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930
	I0819 17:40:57.370143   45795 command_runner.go:130] > 3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433
	I0819 17:40:57.370148   45795 command_runner.go:130] > 1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb
	I0819 17:40:57.370156   45795 command_runner.go:130] > 25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e
	I0819 17:40:57.370163   45795 command_runner.go:130] > 37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765
	I0819 17:40:57.370184   45795 cri.go:89] found id: "81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d"
	I0819 17:40:57.370193   45795 cri.go:89] found id: "1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205"
	I0819 17:40:57.370197   45795 cri.go:89] found id: "2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1"
	I0819 17:40:57.370200   45795 cri.go:89] found id: "176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930"
	I0819 17:40:57.370202   45795 cri.go:89] found id: "3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433"
	I0819 17:40:57.370205   45795 cri.go:89] found id: "1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb"
	I0819 17:40:57.370208   45795 cri.go:89] found id: "25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e"
	I0819 17:40:57.370211   45795 cri.go:89] found id: "37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765"
	I0819 17:40:57.370215   45795 cri.go:89] found id: ""
	I0819 17:40:57.370268   45795 ssh_runner.go:195] Run: sudo runc list -f json
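	The container IDs collected above come from filtering CRI-O by the kube-system namespace label; the same query can be reproduced directly on the node with the commands shown in the log:

	    # List all kube-system containers (running and exited), IDs only, as above.
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # Cross-check against the low-level runtime's view of running containers.
	    sudo runc list -f json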
	
	
	==> CRI-O <==
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.178538135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089507178516045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51289723-7642-4aa8-ad5c-5622f86a77f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.179114047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6abf44b6-a968-4295-96e0-8e60206de73d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.179190726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6abf44b6-a968-4295-96e0-8e60206de73d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.179535416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6abf44b6-a968-4295-96e0-8e60206de73d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.218924231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27032d29-510a-4005-ab8e-dc90fbb08068 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.219006975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27032d29-510a-4005-ab8e-dc90fbb08068 name=/runtime.v1.RuntimeService/Version
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.220175648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=956074fb-5451-4114-83e5-e0a73878f2dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.220747845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089507220696694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=956074fb-5451-4114-83e5-e0a73878f2dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.221316153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ac492cb-1e3f-4a4d-b40f-fa11a0f22e97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.221393810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ac492cb-1e3f-4a4d-b40f-fa11a0f22e97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.221783170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ac492cb-1e3f-4a4d-b40f-fa11a0f22e97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.262554091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cab6561-a577-4b01-b187-183b8de0dd6b name=/runtime.v1.RuntimeService/Version
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.262679659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cab6561-a577-4b01-b187-183b8de0dd6b name=/runtime.v1.RuntimeService/Version
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.263692545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76e81f62-90b1-42a7-a127-73c5144ebab1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.264123296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089507264102012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76e81f62-90b1-42a7-a127-73c5144ebab1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.264552133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4361644-6996-4b59-b7c8-b2ac522c8276 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.264653011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4361644-6996-4b59-b7c8-b2ac522c8276 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.267245704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4361644-6996-4b59-b7c8-b2ac522c8276 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.310361781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef6bba60-3943-4d13-a74f-e5b55841a08a name=/runtime.v1.RuntimeService/Version
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.310437047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef6bba60-3943-4d13-a74f-e5b55841a08a name=/runtime.v1.RuntimeService/Version
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.311564129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7543685a-4e97-42ec-ba12-1fdac65fa423 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.312016208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089507311993163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7543685a-4e97-42ec-ba12-1fdac65fa423 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.312517796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78ce06d6-f533-4073-9c01-fc2aa5ef8e61 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.312612859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78ce06d6-f533-4073-9c01-fc2aa5ef8e61 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 17:45:07 multinode-188752 crio[2747]: time="2024-08-19 17:45:07.312979257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07ca491dfda88c11465c6d2e96fa17d054197175d265ccedd52a92d9442981f0,PodSandboxId:674214e5d35525e4fc3cec5b806a88cd360c1b8d1b3804152a8993d3d131033a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724089297212399068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4,PodSandboxId:9a18756d8dd4dba6a73e73f14fc683848a448ea49fe63ae2596c96710e40c461,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724089263652342831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47,PodSandboxId:30aee9f2fdde498e409447cef0444adcd49a590f730492efcf43c862da396f02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724089263460899904,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f07663a86a1fa6ff64c06a3ca6e88446673e98ebeed1ef810dbe02c48c75dab,PodSandboxId:5b5043e2e12c5417cb56910f33e9970a577707e200ddd00cb94c9e78caba9cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724089263450602001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8941e42317dea38472221549875240680597844704007c0a764ac708a8647893,PodSandboxId:5b311b241b4cdac3071e62b61874ef25a7efcbb8488bf6d508ef4d45f7f81e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724089259656804650,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c,PodSandboxId:d7390fc89932e52297ea90685d68d5398b15a82a770daff3ae3dca5fa7ebb67a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724089259648925652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06,PodSandboxId:00bd52dbedae85ee150a7a0cf21a370ff1773139ee68bb722ed96ecb32e3b496,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724089259646156174,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da55b
56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777,PodSandboxId:232ec0a844a3bb2e35b662ac6cd6551a5d59494e61c11a5dcdaaaeee00266d9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724089259597951698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc,PodSandboxId:4056fb23938f40126314f1fa1ef33af61f0b43874a65158fce1ea4be6add3074,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724089257474213122,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abd882b91b50e076d63907c439c187114dd89191eb5e6a69437d630089433ee,PodSandboxId:af2d8a3af05f23891bc3835bf8e12dd53eca23a880d435ed82c18e50b9a006ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724088935561145439,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vxmhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6d4a109-9a98-4893-98d4-9beaa1dc3d9c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d,PodSandboxId:29e22f9fa14322caa50bb0faf7356dc9d9db04e3ef599448a453971d39600d1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724088877120420896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mnbvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d34c1c2-a893-448a-9259-02f940f69d52,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f84fa074cce8a1f0974297f59a323d01b59509728fc5b69a5ec461dc29b4205,PodSandboxId:566e4b7ea183c0aa9ace79fc2a858586e8e84a42a8c6fd8034cae4bde65ca873,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724088877066898310,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: d5f7287f-0569-4d7c-8c7e-401d9f03627f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1,PodSandboxId:28f4d55043b44bf3e91ddb48e286ca7970493a85d37ce0d024a2c06fbafa9659,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724088865366766605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ncksr,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: a6aff881-2f0f-4bbb-b941-c4f3e42f0161,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930,PodSandboxId:989926b39be5f89b1387400a25b4c9a16f2c85658b26008dde1b48cbdd943b4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724088861779125933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56fnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d496353b-fa08-4d15-a00f-a2b0bbf3dbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433,PodSandboxId:c2327989a19cb3143de0cfc13899547e33791d168ca26d088a4ba654f3d43725,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724088850737120984,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f4404bfeac0ec4ed2d763a6e30dc30b
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb,PodSandboxId:180db7099269a10792b9f9113b2e3cc4e9326214089c85418b74dc20b96a0c8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724088850725163427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 299ef527f5a1e3e9ac04cf5f1e932e7c,},Annotations:
map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e,PodSandboxId:910d2d20c5a51395eda2258a7dc233fe444d4adb1223e482148e445d790e67e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724088850653468186,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c7e473af4c2d8c4e7aa4f8d84e8e5b,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765,PodSandboxId:39c06d029ce8e31747c4358bad9229eadd4875359b572fad8bf23b1a9af81d91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724088850592417304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-188752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26aa1574976d0c7f3da386dc65ec0135,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78ce06d6-f533-4073-9c01-fc2aa5ef8e61 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07ca491dfda88       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   674214e5d3552       busybox-7dff88458-vxmhm
	2e42d64f923ad       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   9a18756d8dd4d       kindnet-ncksr
	5767babbe0bce       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   30aee9f2fdde4       kube-proxy-56fnf
	0f07663a86a1f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   5b5043e2e12c5       storage-provisioner
	8941e42317dea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   5b311b241b4cd       etcd-multinode-188752
	1b2d87d82d3f8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   d7390fc89932e       kube-scheduler-multinode-188752
	953f745a20681       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   00bd52dbedae8       kube-apiserver-multinode-188752
	83da55b56059e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   232ec0a844a3b       kube-controller-manager-multinode-188752
	216e4c8e10963       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   4056fb23938f4       coredns-6f6b679f8f-mnbvf
	1abd882b91b50       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   af2d8a3af05f2       busybox-7dff88458-vxmhm
	81a9d4f57a424       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   29e22f9fa1432       coredns-6f6b679f8f-mnbvf
	1f84fa074cce8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   566e4b7ea183c       storage-provisioner
	2742257ec8503       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   28f4d55043b44       kindnet-ncksr
	176f9fa0d86f6       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   989926b39be5f       kube-proxy-56fnf
	3bee0cdeb76b7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   c2327989a19cb       etcd-multinode-188752
	1ba01f8ae738a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   180db7099269a       kube-scheduler-multinode-188752
	25d4ed3bd6626       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   910d2d20c5a51       kube-controller-manager-multinode-188752
	37d1d2de67baa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   39c06d029ce8e       kube-apiserver-multinode-188752
	
	
	==> coredns [216e4c8e109638778cfd96f3abca621dacc51ee69d4de097ad14a23e73d543fc] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38720 - 55441 "HINFO IN 5343997893510302207.9165578157836977038. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018944069s
	
	
	==> coredns [81a9d4f57a4240aa043a716c2d186fc0064e705bc9a10f0d66a859c972b05e9d] <==
	[INFO] 10.244.0.3:41851 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001746508s
	[INFO] 10.244.0.3:42665 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000057764s
	[INFO] 10.244.0.3:42099 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000025724s
	[INFO] 10.244.0.3:38323 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001091561s
	[INFO] 10.244.0.3:53058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061165s
	[INFO] 10.244.0.3:42543 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043406s
	[INFO] 10.244.0.3:46403 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034173s
	[INFO] 10.244.1.2:57075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115734s
	[INFO] 10.244.1.2:34531 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147361s
	[INFO] 10.244.1.2:57457 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089663s
	[INFO] 10.244.1.2:40116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008145s
	[INFO] 10.244.0.3:50771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092382s
	[INFO] 10.244.0.3:50393 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113357s
	[INFO] 10.244.0.3:41834 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042065s
	[INFO] 10.244.0.3:36633 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040365s
	[INFO] 10.244.1.2:32801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134907s
	[INFO] 10.244.1.2:37751 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157753s
	[INFO] 10.244.1.2:54066 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116598s
	[INFO] 10.244.1.2:39363 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087178s
	[INFO] 10.244.0.3:35037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118078s
	[INFO] 10.244.0.3:35544 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073756s
	[INFO] 10.244.0.3:48081 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068742s
	[INFO] 10.244.0.3:39718 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050659s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-188752
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-188752
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=multinode-188752
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_34_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:34:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-188752
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:45:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:41:02 +0000   Mon, 19 Aug 2024 17:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-188752
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 724ce6a54c4c477cb4868dea45e6dda4
	  System UUID:                724ce6a5-4c4c-477c-b486-8dea45e6dda4
	  Boot ID:                    606b75a4-7cc0-4e88-b238-d5c7997ed47c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vxmhm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 coredns-6f6b679f8f-mnbvf                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-188752                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-ncksr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-188752             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-188752    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-56fnf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-188752             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-188752 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-188752 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-188752 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-188752 event: Registered Node multinode-188752 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-188752 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-188752 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-188752 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-188752 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node multinode-188752 event: Registered Node multinode-188752 in Controller
	
	
	Name:               multinode-188752-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-188752-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=multinode-188752
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T17_41_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:41:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-188752-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:42:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:43:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:43:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:43:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 17:42:13 +0000   Mon, 19 Aug 2024 17:43:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    multinode-188752-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d66ec704151c48f9a62b2041a5b6525c
	  System UUID:                d66ec704-151c-48f9-a62b-2041a5b6525c
	  Boot ID:                    79179e9c-db29-40c4-97f1-d50f7fc8184b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7z224    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-4s8lm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m57s
	  kube-system                 kube-proxy-svsc7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m58s (x2 over 9m58s)  kubelet          Node multinode-188752-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m58s (x2 over 9m58s)  kubelet          Node multinode-188752-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x2 over 9m58s)  kubelet          Node multinode-188752-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m37s                  kubelet          Node multinode-188752-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-188752-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-188752-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-188752-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-188752-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-188752-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.054968] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069178] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.191772] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.118720] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.264512] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.837018] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.611150] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.061918] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.483059] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.078975] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.104746] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.123578] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.070599] kauditd_printk_skb: 58 callbacks suppressed
	[Aug19 17:35] kauditd_printk_skb: 14 callbacks suppressed
	[Aug19 17:40] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.148300] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.159805] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.139742] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +0.260045] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +1.977523] systemd-fstab-generator[2829]: Ignoring "noauto" option for root device
	[  +1.994487] systemd-fstab-generator[3056]: Ignoring "noauto" option for root device
	[  +0.728166] kauditd_printk_skb: 154 callbacks suppressed
	[Aug19 17:41] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.505691] systemd-fstab-generator[3788]: Ignoring "noauto" option for root device
	[ +18.453629] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [3bee0cdeb76b7f3397d48d6637b39b3243dd3f2bf9d1ec23cde83a565d571433] <==
	{"level":"info","ts":"2024-08-19T17:34:12.163855Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:34:12.180731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	{"level":"warn","ts":"2024-08-19T17:35:09.886907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.850146ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:35:09.887190Z","caller":"traceutil/trace.go:171","msg":"trace[953954464] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:450; }","duration":"152.18195ms","start":"2024-08-19T17:35:09.734997Z","end":"2024-08-19T17:35:09.887178Z","steps":["trace[953954464] 'range keys from in-memory index tree'  (duration: 151.838482ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:35:09.887085Z","caller":"traceutil/trace.go:171","msg":"trace[1185464655] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"206.126437ms","start":"2024-08-19T17:35:09.680946Z","end":"2024-08-19T17:35:09.887072Z","steps":["trace[1185464655] 'process raft request'  (duration: 204.705459ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:35:13.096387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.035586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:35:13.096437Z","caller":"traceutil/trace.go:171","msg":"trace[182870289] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:482; }","duration":"140.118283ms","start":"2024-08-19T17:35:12.956308Z","end":"2024-08-19T17:35:13.096426Z","steps":["trace[182870289] 'range keys from in-memory index tree'  (duration: 139.9871ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:36:09.031330Z","caller":"traceutil/trace.go:171","msg":"trace[100700449] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:621; }","duration":"131.902475ms","start":"2024-08-19T17:36:08.899394Z","end":"2024-08-19T17:36:09.031296Z","steps":["trace[100700449] 'read index received'  (duration: 128.236213ms)","trace[100700449] 'applied index is now lower than readState.Index'  (duration: 3.664681ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:36:09.031388Z","caller":"traceutil/trace.go:171","msg":"trace[292842797] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"132.007782ms","start":"2024-08-19T17:36:08.899359Z","end":"2024-08-19T17:36:09.031367Z","steps":["trace[292842797] 'process raft request'  (duration: 128.310256ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:36:09.031781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.325304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-188752-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:36:09.031836Z","caller":"traceutil/trace.go:171","msg":"trace[763824854] range","detail":"{range_begin:/registry/minions/multinode-188752-m03; range_end:; response_count:0; response_revision:590; }","duration":"132.438192ms","start":"2024-08-19T17:36:08.899390Z","end":"2024-08-19T17:36:09.031828Z","steps":["trace[763824854] 'agreement among raft nodes before linearized reading'  (duration: 132.02645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:36:10.380248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.837851ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10492139648658991075 > lease_revoke:<id:119b916bb41c675d>","response":"size:28"}
	{"level":"info","ts":"2024-08-19T17:36:10.717518Z","caller":"traceutil/trace.go:171","msg":"trace[557239600] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"178.166794ms","start":"2024-08-19T17:36:10.539338Z","end":"2024-08-19T17:36:10.717505Z","steps":["trace[557239600] 'process raft request'  (duration: 178.018137ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:37:06.146640Z","caller":"traceutil/trace.go:171","msg":"trace[1195799428] transaction","detail":"{read_only:false; response_revision:723; number_of_response:1; }","duration":"113.310467ms","start":"2024-08-19T17:37:06.033253Z","end":"2024-08-19T17:37:06.146563Z","steps":["trace[1195799428] 'process raft request'  (duration: 113.201333ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:39:22.836913Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-19T17:39:22.837062Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-188752","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	{"level":"warn","ts":"2024-08-19T17:39:22.837192Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:39:22.837304Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/08/19 17:39:22 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-19T17:39:22.885164Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-19T17:39:22.885203Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-19T17:39:22.885288Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9199217ddd03919b","current-leader-member-id":"9199217ddd03919b"}
	{"level":"info","ts":"2024-08-19T17:39:22.887796Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:39:22.887963Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:39:22.887985Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-188752","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	
	
	==> etcd [8941e42317dea38472221549875240680597844704007c0a764ac708a8647893] <==
	{"level":"info","ts":"2024-08-19T17:41:00.064693Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","added-peer-id":"9199217ddd03919b","added-peer-peer-urls":["https://192.168.39.69:2380"]}
	{"level":"info","ts":"2024-08-19T17:41:00.064914Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:41:00.064994Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:41:00.068227Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:41:00.068928Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:41:00.068940Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-19T17:41:00.072918Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:41:00.072944Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-08-19T17:41:01.297260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T17:41:01.297313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:41:01.297355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-08-19T17:41:01.297375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.297383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgVoteResp from 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.297397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became leader at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.297406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9199217ddd03919b elected leader 9199217ddd03919b at term 3"}
	{"level":"info","ts":"2024-08-19T17:41:01.302545Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9199217ddd03919b","local-member-attributes":"{Name:multinode-188752 ClientURLs:[https://192.168.39.69:2379]}","request-path":"/0/members/9199217ddd03919b/attributes","cluster-id":"6c21f62219c1156b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T17:41:01.302770Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:41:01.302853Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:41:01.303443Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:41:01.303478Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:41:01.304416Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:41:01.304416Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:41:01.306412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T17:41:01.306636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.69:2379"}
	{"level":"info","ts":"2024-08-19T17:42:29.598616Z","caller":"traceutil/trace.go:171","msg":"trace[538643094] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"152.35683ms","start":"2024-08-19T17:42:29.446170Z","end":"2024-08-19T17:42:29.598526Z","steps":["trace[538643094] 'process raft request'  (duration: 152.199839ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:45:07 up 11 min,  0 users,  load average: 0.06, 0.20, 0.14
	Linux multinode-188752 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2742257ec850362ee27cd5068c54001d3ad243ae0a370538f32aa032108d0bb1] <==
	I0819 17:38:36.198786       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:38:46.199373       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:38:46.199430       1 main.go:299] handling current node
	I0819 17:38:46.199452       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:38:46.199458       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:38:46.199698       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:38:46.199707       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:38:56.193381       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:38:56.193436       1 main.go:299] handling current node
	I0819 17:38:56.193457       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:38:56.193464       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:38:56.193657       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:38:56.193681       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:39:06.202245       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:39:06.202281       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:39:06.202411       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:39:06.202431       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	I0819 17:39:06.202520       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:39:06.202539       1 main.go:299] handling current node
	I0819 17:39:16.198770       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:39:16.198833       1 main.go:299] handling current node
	I0819 17:39:16.198848       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:39:16.198854       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:39:16.198981       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0819 17:39:16.198987       1 main.go:322] Node multinode-188752-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [2e42d64f923ad106c5681e0337d1002718a3d7e697e086fc7275996c921a31f4] <==
	I0819 17:44:04.490774       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:14.491928       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:44:14.492129       1 main.go:299] handling current node
	I0819 17:44:14.492158       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:44:14.492176       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:24.499101       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:44:24.499167       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:24.499346       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:44:24.499368       1 main.go:299] handling current node
	I0819 17:44:34.497036       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:44:34.497162       1 main.go:299] handling current node
	I0819 17:44:34.497202       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:44:34.497224       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:44.497415       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:44:44.497551       1 main.go:299] handling current node
	I0819 17:44:44.497647       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:44:44.497789       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:44:54.499990       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:44:54.500019       1 main.go:299] handling current node
	I0819 17:44:54.500033       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:44:54.500037       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	I0819 17:45:04.491563       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0819 17:45:04.491703       1 main.go:299] handling current node
	I0819 17:45:04.491735       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0819 17:45:04.491753       1 main.go:322] Node multinode-188752-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [37d1d2de67baa9a16a4db9578b26b1311c66a8dc142008cdc972da15dcc9d765] <==
	I0819 17:39:22.846448       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0819 17:39:22.846473       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0819 17:39:22.846500       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0819 17:39:22.846525       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0819 17:39:22.846544       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0819 17:39:22.846557       1 establishing_controller.go:92] Shutting down EstablishingController
	I0819 17:39:22.846634       1 naming_controller.go:305] Shutting down NamingConditionController
	I0819 17:39:22.846657       1 controller.go:170] Shutting down OpenAPI controller
	I0819 17:39:22.846714       1 autoregister_controller.go:168] Shutting down autoregister controller
	I0819 17:39:22.846741       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0819 17:39:22.846765       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0819 17:39:22.846798       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0819 17:39:22.848954       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 17:39:22.849210       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0819 17:39:22.853056       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853161       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853249       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853337       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853552       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.853932       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 17:39:22.854070       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 17:39:22.854234       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0819 17:39:22.854328       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 17:39:22.854400       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0819 17:39:22.856486       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-apiserver [953f745a20681d20faf684c37d4ba7423292c88cb25c5dd0f7eaa0c8f3b29c06] <==
	I0819 17:41:02.567192       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 17:41:02.576295       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 17:41:02.577719       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 17:41:02.578184       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 17:41:02.578230       1 policy_source.go:224] refreshing policies
	I0819 17:41:02.578504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 17:41:02.578642       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 17:41:02.578900       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 17:41:02.591632       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 17:41:02.594254       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 17:41:02.594282       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 17:41:02.610230       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 17:41:02.617972       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 17:41:02.623537       1 aggregator.go:171] initial CRD sync complete...
	I0819 17:41:02.623598       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 17:41:02.623621       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 17:41:02.623633       1 cache.go:39] Caches are synced for autoregister controller
	I0819 17:41:03.476315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 17:41:04.495794       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 17:41:04.648364       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 17:41:04.667739       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 17:41:04.746294       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 17:41:04.752812       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 17:41:06.077931       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 17:41:06.178869       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [25d4ed3bd662621cf62446570fdb5be67551f34626c8788c5fff91db7457776e] <==
	I0819 17:36:57.013196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:57.248835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:57.249719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:36:58.097181       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-188752-m03\" does not exist"
	I0819 17:36:58.097236       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:36:58.115174       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-188752-m03" podCIDRs=["10.244.3.0/24"]
	I0819 17:36:58.115209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:58.115252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:58.233642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:36:58.550839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:00.537396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:08.375851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:17.649409       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:37:17.649440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:17.662476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:20.477870       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:37:55.493167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:37:55.493269       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m03"
	I0819 17:37:55.509999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:37:55.544329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.651337ms"
	I0819 17:37:55.545021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.432µs"
	I0819 17:38:00.546841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:38:00.561487       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:38:00.623160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:38:10.697112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	
	
	==> kube-controller-manager [83da55b56059e42ae778081cc89b18fa2733c79007e2d326522c513e4d705777] <==
	E0819 17:42:21.585454       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-188752-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-188752-m03"
	E0819 17:42:21.585516       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-188752-m03': failed to patch node CIDR: Node \"multinode-188752-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0819 17:42:21.585542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:21.591043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:21.813435       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:22.143635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:25.984423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:31.888106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:41.144776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:41.144944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:42:41.154138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:45.726228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:45.740731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:45.906040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:42:46.183411       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-188752-m02"
	I0819 17:42:46.183840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m03"
	I0819 17:43:25.923353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:43:25.943424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:43:25.965549       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.895447ms"
	I0819 17:43:25.966275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.448µs"
	I0819 17:43:30.993555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-188752-m02"
	I0819 17:43:45.837325       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dhm77"
	I0819 17:43:45.863929       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dhm77"
	I0819 17:43:45.864021       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kqw6z"
	I0819 17:43:45.884094       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kqw6z"
	
	
	==> kube-proxy [176f9fa0d86f6ebc87a086de090a0b8092b1b988508af28b8af44d5f71f7b930] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:34:22.138959       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:34:22.151504       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E0819 17:34:22.151611       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:34:22.191752       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:34:22.191821       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:34:22.191990       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:34:22.194204       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:34:22.194482       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:34:22.194506       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:34:22.196447       1 config.go:197] "Starting service config controller"
	I0819 17:34:22.196724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:34:22.196794       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:34:22.196812       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:34:22.197292       1 config.go:326] "Starting node config controller"
	I0819 17:34:22.197314       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:34:22.297701       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:34:22.297748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:34:22.297808       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [5767babbe0bceb68a428628336811a88b11f3d29d047c7ff6dcfb05f06a46e47] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 17:41:03.749149       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 17:41:03.758767       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	E0819 17:41:03.758844       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:41:03.821019       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 17:41:03.821083       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 17:41:03.821112       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:41:03.827403       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:41:03.827714       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:41:03.827737       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:41:03.832661       1 config.go:197] "Starting service config controller"
	I0819 17:41:03.832747       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:41:03.832828       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:41:03.832843       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:41:03.834401       1 config.go:326] "Starting node config controller"
	I0819 17:41:03.834422       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:41:03.933693       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:41:03.933725       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:41:03.935063       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1b2d87d82d3f8a7de004cd01d22076fd7972000fc3eda41fad9af97730b0a02c] <==
	I0819 17:41:00.349431       1 serving.go:386] Generated self-signed cert in-memory
	W0819 17:41:02.520222       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 17:41:02.520356       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 17:41:02.520393       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 17:41:02.520463       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 17:41:02.606912       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 17:41:02.606950       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:41:02.616534       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 17:41:02.616709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 17:41:02.616763       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 17:41:02.616788       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 17:41:02.716946       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [1ba01f8ae738a1cba419f9215a09a3d1d4f91ff6f8fe523f004c6d58e13fa7cb] <==
	E0819 17:34:13.510038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.510021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:34:13.510157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.510171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:13.510355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.510124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:34:13.510453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:13.518445       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:34:13.518613       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:34:14.404197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.404394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.486155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.486273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.565765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:34:14.565906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.590521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.590729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.602947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:34:14.603027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.648044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:34:14.648433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:34:14.724154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:34:14.724268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 17:34:15.094549       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 17:39:22.826475       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 19 17:43:49 multinode-188752 kubelet[3063]: E0819 17:43:49.104258    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089429103939512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:43:59 multinode-188752 kubelet[3063]: E0819 17:43:59.022693    3063 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:43:59 multinode-188752 kubelet[3063]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:43:59 multinode-188752 kubelet[3063]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:43:59 multinode-188752 kubelet[3063]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:43:59 multinode-188752 kubelet[3063]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:43:59 multinode-188752 kubelet[3063]: E0819 17:43:59.106828    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089439105871521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:43:59 multinode-188752 kubelet[3063]: E0819 17:43:59.107349    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089439105871521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:09 multinode-188752 kubelet[3063]: E0819 17:44:09.109460    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089449109143860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:09 multinode-188752 kubelet[3063]: E0819 17:44:09.109503    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089449109143860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:19 multinode-188752 kubelet[3063]: E0819 17:44:19.110472    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089459110243024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:19 multinode-188752 kubelet[3063]: E0819 17:44:19.110508    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089459110243024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:29 multinode-188752 kubelet[3063]: E0819 17:44:29.111632    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089469111284804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:29 multinode-188752 kubelet[3063]: E0819 17:44:29.111671    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089469111284804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:39 multinode-188752 kubelet[3063]: E0819 17:44:39.113280    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089479113034320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:39 multinode-188752 kubelet[3063]: E0819 17:44:39.113316    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089479113034320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:49 multinode-188752 kubelet[3063]: E0819 17:44:49.115557    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089489114914944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:49 multinode-188752 kubelet[3063]: E0819 17:44:49.115631    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089489114914944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:59 multinode-188752 kubelet[3063]: E0819 17:44:59.021477    3063 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 17:44:59 multinode-188752 kubelet[3063]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 17:44:59 multinode-188752 kubelet[3063]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 17:44:59 multinode-188752 kubelet[3063]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 17:44:59 multinode-188752 kubelet[3063]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 17:44:59 multinode-188752 kubelet[3063]: E0819 17:44:59.118434    3063 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089499117939387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 17:44:59 multinode-188752 kubelet[3063]: E0819 17:44:59.118835    3063 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724089499117939387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 17:45:06.929495   47712 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19478-10654/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
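The "bufio.Scanner: token too long" failure above is the stock Go error for a line longer than bufio.Scanner's default 64 KiB token limit (bufio.MaxScanTokenSize), which a single line of lastStart.txt evidently exceeded. A minimal sketch of reading such a file with an enlarged scanner buffer; the path and the 10 MiB cap are illustrative assumptions, not minikube's actual settings:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // illustrative path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // Without this, a line over bufio.MaxScanTokenSize (64 KiB) stops the
        // scan and sc.Err() reports bufio.ErrTooLong ("token too long").
        sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // 10 MiB cap (arbitrary)

        for sc.Scan() {
            _ = sc.Text() // process each line
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan failed:", err)
        }
    }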
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-188752 -n multinode-188752
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-188752 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.12s)

                                                
                                    
x
+
TestPreload (272.23s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-459056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 17:50:21.263059   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-459056 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.575743343s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-459056 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-459056 image pull gcr.io/k8s-minikube/busybox: (2.835953304s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-459056
E0819 17:53:15.961418   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-459056: exit status 82 (2m0.471451105s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-459056"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
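The exit status 82 above pairs with the GUEST_STOP_TIMEOUT message: the stop path waited roughly two minutes and gave up while the VM still reported state "Running". The general shape of that kind of wait is a deadline plus periodic state polls; a minimal sketch assuming a hypothetical getState() stand-in for the driver's state query (this is not minikube's actual stop code):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // getState is a hypothetical stand-in for a VM driver's state query.
    func getState() string { return "Running" }

    // waitStopped polls until the machine reports "Stopped" or the deadline passes.
    func waitStopped(ctx context.Context) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            if getState() == "Stopped" {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("unable to stop vm, current state %q: %w", getState(), ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        // The failed run used a ~2m budget; a short deadline keeps the sketch quick.
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        if err := waitStopped(ctx); err != nil {
            fmt.Println("stop timed out:", err) // analogous to GUEST_STOP_TIMEOUT
        }
    }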
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-459056 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-19 17:53:34.244011433 +0000 UTC m=+3675.948179323
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-459056 -n test-preload-459056
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-459056 -n test-preload-459056: exit status 3 (18.444401139s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 17:53:52.685137   50645 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E0819 17:53:52.685156   50645 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
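Both status errors reduce to TCP port 22 on 192.168.39.159 being unreachable ("no route to host") once the host dropped off the network. A quick reachability probe of the same shape, offered as a diagnostic sketch rather than what status.go itself runs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "192.168.39.159:22" // address taken from the error above
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            // A downed or unroutable VM typically surfaces here as
            // "connect: no route to host" or a timeout.
            fmt.Println("ssh port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("ssh port reachable")
    }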
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-459056" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-459056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-459056
--- FAIL: TestPreload (272.23s)

                                                
                                    
x
+
TestKubernetesUpgrade (379.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m29.462963191s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-415209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-415209" primary control-plane node in "kubernetes-upgrade-415209" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:55:47.475594   51735 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:55:47.475785   51735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:55:47.475809   51735 out.go:358] Setting ErrFile to fd 2...
	I0819 17:55:47.475821   51735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:55:47.476081   51735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:55:47.476846   51735 out.go:352] Setting JSON to false
	I0819 17:55:47.477982   51735 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5892,"bootTime":1724084255,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:55:47.478059   51735 start.go:139] virtualization: kvm guest
	I0819 17:55:47.480152   51735 out.go:177] * [kubernetes-upgrade-415209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:55:47.482189   51735 notify.go:220] Checking for updates...
	I0819 17:55:47.483628   51735 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:55:47.484984   51735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:55:47.486573   51735 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:55:47.488134   51735 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:55:47.489381   51735 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:55:47.490933   51735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:55:47.492288   51735 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:55:47.530549   51735 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 17:55:47.531894   51735 start.go:297] selected driver: kvm2
	I0819 17:55:47.531915   51735 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:55:47.531928   51735 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:55:47.532874   51735 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:55:47.532962   51735 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 17:55:47.551929   51735 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 17:55:47.551991   51735 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:55:47.552265   51735 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:55:47.552329   51735 cni.go:84] Creating CNI manager for ""
	I0819 17:55:47.552372   51735 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:55:47.552386   51735 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 17:55:47.552470   51735 start.go:340] cluster config:
	{Name:kubernetes-upgrade-415209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:55:47.552589   51735 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:55:47.554734   51735 out.go:177] * Starting "kubernetes-upgrade-415209" primary control-plane node in "kubernetes-upgrade-415209" cluster
	I0819 17:55:47.556102   51735 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:55:47.556139   51735 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:55:47.556152   51735 cache.go:56] Caching tarball of preloaded images
	I0819 17:55:47.556238   51735 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:55:47.556251   51735 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 17:55:47.556635   51735 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/config.json ...
	I0819 17:55:47.556659   51735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/config.json: {Name:mk8d6dc08d0f6d90c1e6f45e803c84fb80bda72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:55:47.556830   51735 start.go:360] acquireMachinesLock for kubernetes-upgrade-415209: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 17:55:47.556864   51735 start.go:364] duration metric: took 17.988µs to acquireMachinesLock for "kubernetes-upgrade-415209"
	I0819 17:55:47.556879   51735 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-415209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:55:47.556967   51735 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 17:55:47.558524   51735 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 17:55:47.558658   51735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:55:47.558698   51735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:55:47.573444   51735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0819 17:55:47.574002   51735 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:55:47.574612   51735 main.go:141] libmachine: Using API Version  1
	I0819 17:55:47.574635   51735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:55:47.575041   51735 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:55:47.575281   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetMachineName
	I0819 17:55:47.575437   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:55:47.575603   51735 start.go:159] libmachine.API.Create for "kubernetes-upgrade-415209" (driver="kvm2")
	I0819 17:55:47.575643   51735 client.go:168] LocalClient.Create starting
	I0819 17:55:47.575674   51735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 17:55:47.575709   51735 main.go:141] libmachine: Decoding PEM data...
	I0819 17:55:47.575731   51735 main.go:141] libmachine: Parsing certificate...
	I0819 17:55:47.575793   51735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 17:55:47.575817   51735 main.go:141] libmachine: Decoding PEM data...
	I0819 17:55:47.575840   51735 main.go:141] libmachine: Parsing certificate...
	I0819 17:55:47.575863   51735 main.go:141] libmachine: Running pre-create checks...
	I0819 17:55:47.575875   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .PreCreateCheck
	I0819 17:55:47.576297   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetConfigRaw
	I0819 17:55:47.576661   51735 main.go:141] libmachine: Creating machine...
	I0819 17:55:47.576673   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .Create
	I0819 17:55:47.576868   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Creating KVM machine...
	I0819 17:55:47.577972   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found existing default KVM network
	I0819 17:55:47.578698   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:47.578535   51794 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d960}
	I0819 17:55:47.578724   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | created network xml: 
	I0819 17:55:47.578738   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | <network>
	I0819 17:55:47.578751   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |   <name>mk-kubernetes-upgrade-415209</name>
	I0819 17:55:47.578762   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |   <dns enable='no'/>
	I0819 17:55:47.578770   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |   
	I0819 17:55:47.578781   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 17:55:47.578789   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |     <dhcp>
	I0819 17:55:47.578798   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 17:55:47.578807   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |     </dhcp>
	I0819 17:55:47.578815   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |   </ip>
	I0819 17:55:47.578821   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG |   
	I0819 17:55:47.578846   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | </network>
	I0819 17:55:47.578863   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | 
	I0819 17:55:47.583916   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | trying to create private KVM network mk-kubernetes-upgrade-415209 192.168.39.0/24...
	I0819 17:55:47.652596   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | private KVM network mk-kubernetes-upgrade-415209 192.168.39.0/24 created
	I0819 17:55:47.652636   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209 ...
	I0819 17:55:47.652649   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:47.652587   51794 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:55:47.652705   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 17:55:47.652743   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 17:55:47.886862   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:47.886696   51794 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa...
	I0819 17:55:48.028915   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:48.028813   51794 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/kubernetes-upgrade-415209.rawdisk...
	I0819 17:55:48.028934   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Writing magic tar header
	I0819 17:55:48.028946   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Writing SSH key tar header
	I0819 17:55:48.029009   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:48.028979   51794 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209 ...
	I0819 17:55:48.029126   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209
	I0819 17:55:48.029149   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209 (perms=drwx------)
	I0819 17:55:48.029156   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 17:55:48.029163   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 17:55:48.029172   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:55:48.029185   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 17:55:48.029191   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 17:55:48.029200   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 17:55:48.029208   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 17:55:48.029217   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 17:55:48.029226   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 17:55:48.029235   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Creating domain...
	I0819 17:55:48.029244   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Checking permissions on dir: /home/jenkins
	I0819 17:55:48.029251   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Checking permissions on dir: /home
	I0819 17:55:48.029258   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Skipping /home - not owner
	I0819 17:55:48.030415   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) define libvirt domain using xml: 
	I0819 17:55:48.030438   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) <domain type='kvm'>
	I0819 17:55:48.030459   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   <name>kubernetes-upgrade-415209</name>
	I0819 17:55:48.030472   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   <memory unit='MiB'>2200</memory>
	I0819 17:55:48.030486   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   <vcpu>2</vcpu>
	I0819 17:55:48.030494   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   <features>
	I0819 17:55:48.030506   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <acpi/>
	I0819 17:55:48.030514   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <apic/>
	I0819 17:55:48.030520   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <pae/>
	I0819 17:55:48.030528   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     
	I0819 17:55:48.030534   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   </features>
	I0819 17:55:48.030541   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   <cpu mode='host-passthrough'>
	I0819 17:55:48.030550   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   
	I0819 17:55:48.030558   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   </cpu>
	I0819 17:55:48.030563   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   <os>
	I0819 17:55:48.030568   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <type>hvm</type>
	I0819 17:55:48.030573   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <boot dev='cdrom'/>
	I0819 17:55:48.030578   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <boot dev='hd'/>
	I0819 17:55:48.030584   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <bootmenu enable='no'/>
	I0819 17:55:48.030593   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   </os>
	I0819 17:55:48.030599   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   <devices>
	I0819 17:55:48.030606   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <disk type='file' device='cdrom'>
	I0819 17:55:48.030615   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/boot2docker.iso'/>
	I0819 17:55:48.030621   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <target dev='hdc' bus='scsi'/>
	I0819 17:55:48.030627   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <readonly/>
	I0819 17:55:48.030631   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     </disk>
	I0819 17:55:48.030640   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <disk type='file' device='disk'>
	I0819 17:55:48.030646   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 17:55:48.030671   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/kubernetes-upgrade-415209.rawdisk'/>
	I0819 17:55:48.030711   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <target dev='hda' bus='virtio'/>
	I0819 17:55:48.030722   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     </disk>
	I0819 17:55:48.030737   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <interface type='network'>
	I0819 17:55:48.030751   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <source network='mk-kubernetes-upgrade-415209'/>
	I0819 17:55:48.030768   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <model type='virtio'/>
	I0819 17:55:48.030777   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     </interface>
	I0819 17:55:48.030782   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <interface type='network'>
	I0819 17:55:48.030789   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <source network='default'/>
	I0819 17:55:48.030800   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <model type='virtio'/>
	I0819 17:55:48.030811   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     </interface>
	I0819 17:55:48.030820   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <serial type='pty'>
	I0819 17:55:48.030832   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <target port='0'/>
	I0819 17:55:48.030842   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     </serial>
	I0819 17:55:48.030851   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <console type='pty'>
	I0819 17:55:48.030862   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <target type='serial' port='0'/>
	I0819 17:55:48.030874   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     </console>
	I0819 17:55:48.030885   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     <rng model='virtio'>
	I0819 17:55:48.030918   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)       <backend model='random'>/dev/random</backend>
	I0819 17:55:48.030946   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     </rng>
	I0819 17:55:48.030959   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     
	I0819 17:55:48.030971   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)     
	I0819 17:55:48.030985   51735 main.go:141] libmachine: (kubernetes-upgrade-415209)   </devices>
	I0819 17:55:48.030995   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) </domain>
	I0819 17:55:48.031007   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) 
	I0819 17:55:48.035790   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:e5:be:44 in network default
	I0819 17:55:48.036331   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Ensuring networks are active...
	I0819 17:55:48.036373   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:48.037274   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Ensuring network default is active
	I0819 17:55:48.037681   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Ensuring network mk-kubernetes-upgrade-415209 is active
	I0819 17:55:48.038265   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Getting domain xml...
	I0819 17:55:48.039121   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Creating domain...
	I0819 17:55:49.291468   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Waiting to get IP...
	I0819 17:55:49.292153   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:49.292513   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:49.292556   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:49.292507   51794 retry.go:31] will retry after 216.751504ms: waiting for machine to come up
	I0819 17:55:49.510964   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:49.511348   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:49.511369   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:49.511298   51794 retry.go:31] will retry after 251.274005ms: waiting for machine to come up
	I0819 17:55:49.764766   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:49.765124   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:49.765153   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:49.765072   51794 retry.go:31] will retry after 466.189846ms: waiting for machine to come up
	I0819 17:55:50.232695   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:50.233249   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:50.233272   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:50.233208   51794 retry.go:31] will retry after 494.476308ms: waiting for machine to come up
	I0819 17:55:50.728848   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:50.729258   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:50.729287   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:50.729217   51794 retry.go:31] will retry after 566.431354ms: waiting for machine to come up
	I0819 17:55:51.297014   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:51.297456   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:51.297481   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:51.297424   51794 retry.go:31] will retry after 864.899681ms: waiting for machine to come up
	I0819 17:55:52.163949   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:52.164387   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:52.164422   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:52.164363   51794 retry.go:31] will retry after 793.562961ms: waiting for machine to come up
	I0819 17:55:52.959823   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:52.960289   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:52.960325   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:52.960263   51794 retry.go:31] will retry after 1.427185934s: waiting for machine to come up
	I0819 17:55:54.389877   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:54.390368   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:54.390396   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:54.390291   51794 retry.go:31] will retry after 1.227992346s: waiting for machine to come up
	I0819 17:55:55.619502   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:55.619820   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:55.619847   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:55.619773   51794 retry.go:31] will retry after 2.216410224s: waiting for machine to come up
	I0819 17:55:57.838274   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:55:57.838741   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:55:57.838769   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:55:57.838700   51794 retry.go:31] will retry after 2.594482634s: waiting for machine to come up
	I0819 17:56:00.434963   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:00.435461   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:56:00.435488   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:56:00.435400   51794 retry.go:31] will retry after 3.036243708s: waiting for machine to come up
	I0819 17:56:03.473450   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:03.473927   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:56:03.473954   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:56:03.473893   51794 retry.go:31] will retry after 3.169264191s: waiting for machine to come up
	I0819 17:56:06.645830   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:06.646205   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find current IP address of domain kubernetes-upgrade-415209 in network mk-kubernetes-upgrade-415209
	I0819 17:56:06.646265   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | I0819 17:56:06.646189   51794 retry.go:31] will retry after 4.388534612s: waiting for machine to come up
	I0819 17:56:11.037881   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.038332   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has current primary IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.038368   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Found IP for machine: 192.168.39.81
	I0819 17:56:11.038385   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Reserving static IP address...
	I0819 17:56:11.038763   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-415209", mac: "52:54:00:7a:14:03", ip: "192.168.39.81"} in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.111690   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Getting to WaitForSSH function...
	I0819 17:56:11.111715   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Reserved static IP address: 192.168.39.81
	I0819 17:56:11.111730   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Waiting for SSH to be available...
	I0819 17:56:11.114397   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.114986   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.115012   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.115179   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Using SSH client type: external
	I0819 17:56:11.115211   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa (-rw-------)
	I0819 17:56:11.115251   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 17:56:11.115270   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | About to run SSH command:
	I0819 17:56:11.115284   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | exit 0
	I0819 17:56:11.240744   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | SSH cmd err, output: <nil>: 
	I0819 17:56:11.241066   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) KVM machine creation complete!
	I0819 17:56:11.241301   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetConfigRaw
	I0819 17:56:11.241926   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:56:11.242117   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:56:11.242278   51735 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 17:56:11.242293   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetState
	I0819 17:56:11.243460   51735 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 17:56:11.243477   51735 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 17:56:11.243485   51735 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 17:56:11.243494   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:11.245641   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.245989   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.246025   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.246086   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:11.246282   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.246462   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.246600   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:11.246778   51735 main.go:141] libmachine: Using SSH client type: native
	I0819 17:56:11.246966   51735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0819 17:56:11.246978   51735 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 17:56:11.355772   51735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:56:11.355793   51735 main.go:141] libmachine: Detecting the provisioner...
	I0819 17:56:11.355802   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:11.358803   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.359148   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.359178   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.359345   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:11.359551   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.359718   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.359864   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:11.360015   51735 main.go:141] libmachine: Using SSH client type: native
	I0819 17:56:11.360172   51735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0819 17:56:11.360182   51735 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 17:56:11.469039   51735 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 17:56:11.469120   51735 main.go:141] libmachine: found compatible host: buildroot
	I0819 17:56:11.469134   51735 main.go:141] libmachine: Provisioning with buildroot...
	I0819 17:56:11.469145   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetMachineName
	I0819 17:56:11.469428   51735 buildroot.go:166] provisioning hostname "kubernetes-upgrade-415209"
	I0819 17:56:11.469457   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetMachineName
	I0819 17:56:11.469671   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:11.472356   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.472693   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.472739   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.472812   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:11.472985   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.473150   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.473279   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:11.473519   51735 main.go:141] libmachine: Using SSH client type: native
	I0819 17:56:11.473703   51735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0819 17:56:11.473719   51735 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-415209 && echo "kubernetes-upgrade-415209" | sudo tee /etc/hostname
	I0819 17:56:11.594628   51735 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-415209
	
	I0819 17:56:11.594657   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:11.597720   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.598155   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.598183   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.598388   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:11.598634   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.598810   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.598945   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:11.599114   51735 main.go:141] libmachine: Using SSH client type: native
	I0819 17:56:11.599371   51735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0819 17:56:11.599405   51735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-415209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-415209/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-415209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:56:11.717102   51735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:56:11.717140   51735 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 17:56:11.717193   51735 buildroot.go:174] setting up certificates
	I0819 17:56:11.717211   51735 provision.go:84] configureAuth start
	I0819 17:56:11.717229   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetMachineName
	I0819 17:56:11.717519   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetIP
	I0819 17:56:11.720447   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.720807   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.720834   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.721006   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:11.723358   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.723663   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.723691   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.723848   51735 provision.go:143] copyHostCerts
	I0819 17:56:11.723928   51735 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 17:56:11.723950   51735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 17:56:11.724019   51735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 17:56:11.724131   51735 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 17:56:11.724140   51735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 17:56:11.724168   51735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 17:56:11.724240   51735 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 17:56:11.724249   51735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 17:56:11.724273   51735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 17:56:11.724377   51735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-415209 san=[127.0.0.1 192.168.39.81 kubernetes-upgrade-415209 localhost minikube]
	I0819 17:56:11.947350   51735 provision.go:177] copyRemoteCerts
	I0819 17:56:11.947409   51735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:56:11.947432   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:11.949942   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.950397   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:11.950432   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:11.950608   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:11.950848   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:11.950996   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:11.951092   51735 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa Username:docker}
	I0819 17:56:12.034992   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 17:56:12.057959   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:56:12.081957   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:56:12.105216   51735 provision.go:87] duration metric: took 387.986262ms to configureAuth
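For reference, a minimal shell sketch (not part of the test run) for spot-checking that the files copied by copyRemoteCerts above actually landed in the guest; the SSH key path, user and IP are taken from the log:
	ssh -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa \
	  docker@192.168.39.81 'sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'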
	I0819 17:56:12.105247   51735 buildroot.go:189] setting minikube options for container-runtime
	I0819 17:56:12.105424   51735 config.go:182] Loaded profile config "kubernetes-upgrade-415209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 17:56:12.105501   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:12.108195   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.108594   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.108632   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.108769   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:12.108985   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:12.109157   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:12.109310   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:12.109490   51735 main.go:141] libmachine: Using SSH client type: native
	I0819 17:56:12.109656   51735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0819 17:56:12.109671   51735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:56:12.371116   51735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:56:12.371149   51735 main.go:141] libmachine: Checking connection to Docker...
	I0819 17:56:12.371158   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetURL
	I0819 17:56:12.372434   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Using libvirt version 6000000
	I0819 17:56:12.374741   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.375162   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.375203   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.375403   51735 main.go:141] libmachine: Docker is up and running!
	I0819 17:56:12.375419   51735 main.go:141] libmachine: Reticulating splines...
	I0819 17:56:12.375425   51735 client.go:171] duration metric: took 24.799774889s to LocalClient.Create
	I0819 17:56:12.375445   51735 start.go:167] duration metric: took 24.799843674s to libmachine.API.Create "kubernetes-upgrade-415209"
	I0819 17:56:12.375454   51735 start.go:293] postStartSetup for "kubernetes-upgrade-415209" (driver="kvm2")
	I0819 17:56:12.375464   51735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:56:12.375479   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:56:12.375734   51735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:56:12.375763   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:12.377786   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.378139   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.378173   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.378328   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:12.378491   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:12.378654   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:12.378833   51735 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa Username:docker}
	I0819 17:56:12.463765   51735 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:56:12.468271   51735 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 17:56:12.468323   51735 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 17:56:12.468407   51735 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 17:56:12.468547   51735 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 17:56:12.468666   51735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 17:56:12.478786   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:56:12.501355   51735 start.go:296] duration metric: took 125.886229ms for postStartSetup
	I0819 17:56:12.501440   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetConfigRaw
	I0819 17:56:12.502118   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetIP
	I0819 17:56:12.504861   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.505221   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.505261   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.505556   51735 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/config.json ...
	I0819 17:56:12.505787   51735 start.go:128] duration metric: took 24.948809058s to createHost
	I0819 17:56:12.505816   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:12.508204   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.508538   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.508575   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.508706   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:12.508876   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:12.509013   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:12.509125   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:12.509274   51735 main.go:141] libmachine: Using SSH client type: native
	I0819 17:56:12.509452   51735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0819 17:56:12.509467   51735 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 17:56:12.617241   51735 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090172.588617361
	
	I0819 17:56:12.617269   51735 fix.go:216] guest clock: 1724090172.588617361
	I0819 17:56:12.617279   51735 fix.go:229] Guest: 2024-08-19 17:56:12.588617361 +0000 UTC Remote: 2024-08-19 17:56:12.505801752 +0000 UTC m=+25.077949892 (delta=82.815609ms)
	I0819 17:56:12.617302   51735 fix.go:200] guest clock delta is within tolerance: 82.815609ms
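A hedged sketch of reproducing the guest-clock comparison above by hand from the host; the key path, user and IP come from the log, and bc is assumed to be installed on the host:
	HOST_TS=$(date +%s.%N)
	GUEST_TS=$(ssh -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa \
	  docker@192.168.39.81 'date +%s.%N')
	# minikube accepts the host/guest skew when it is within tolerance, as logged above
	echo "clock delta: $(echo "$GUEST_TS - $HOST_TS" | bc)s"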
	I0819 17:56:12.617320   51735 start.go:83] releasing machines lock for "kubernetes-upgrade-415209", held for 25.060436746s
	I0819 17:56:12.617364   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:56:12.617649   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetIP
	I0819 17:56:12.620516   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.620865   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.620898   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.621049   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:56:12.621616   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:56:12.621816   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 17:56:12.621905   51735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:56:12.621959   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:12.622013   51735 ssh_runner.go:195] Run: cat /version.json
	I0819 17:56:12.622036   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 17:56:12.624675   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.624938   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.625003   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.625031   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.625183   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:12.625353   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:12.625392   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:12.625445   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:12.625539   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:12.625552   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 17:56:12.625734   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 17:56:12.625732   51735 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa Username:docker}
	I0819 17:56:12.625857   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 17:56:12.625988   51735 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa Username:docker}
	I0819 17:56:12.745051   51735 ssh_runner.go:195] Run: systemctl --version
	I0819 17:56:12.751701   51735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:56:12.913215   51735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 17:56:12.919200   51735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 17:56:12.919267   51735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:56:12.934691   51735 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 17:56:12.934720   51735 start.go:495] detecting cgroup driver to use...
	I0819 17:56:12.934802   51735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:56:12.952204   51735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:56:12.965848   51735 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:56:12.965905   51735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:56:12.978181   51735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:56:12.990357   51735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:56:13.105276   51735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:56:13.274545   51735 docker.go:233] disabling docker service ...
	I0819 17:56:13.274602   51735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:56:13.291066   51735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:56:13.306434   51735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:56:13.443161   51735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:56:13.570211   51735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:56:13.584980   51735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:56:13.602643   51735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 17:56:13.602721   51735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:56:13.612892   51735 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:56:13.612973   51735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:56:13.623428   51735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:56:13.633697   51735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:56:13.643850   51735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
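A small sketch (run inside the guest) for confirming the CRI-O settings that the sed edits above are meant to produce; the file path is taken from the log:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits: pause_image = "registry.k8s.io/pause:3.2", cgroup_manager = "cgroupfs", conmon_cgroup = "pod"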
	I0819 17:56:13.654365   51735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:56:13.663628   51735 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 17:56:13.663700   51735 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 17:56:13.678490   51735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
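The sysctl probe above fails until br_netfilter is loaded, which is why the modprobe fallback follows; a minimal in-guest sketch for checking the resulting state (module availability assumed):
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# both keys should report 1 once the module is loaded and ip_forward has been enabled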
	I0819 17:56:13.689471   51735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:56:13.813793   51735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:56:13.946345   51735 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:56:13.946439   51735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:56:13.951436   51735 start.go:563] Will wait 60s for crictl version
	I0819 17:56:13.951493   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:13.955080   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:56:13.993994   51735 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
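A small sketch for querying the runtime by hand, matching the version probe above; the socket path is the one minikube waits on earlier in the log:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	crio --version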
	I0819 17:56:13.994065   51735 ssh_runner.go:195] Run: crio --version
	I0819 17:56:14.021653   51735 ssh_runner.go:195] Run: crio --version
	I0819 17:56:14.051624   51735 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 17:56:14.053035   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetIP
	I0819 17:56:14.055902   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:14.056338   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 18:56:01 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 17:56:14.056380   51735 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 17:56:14.056634   51735 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 17:56:14.062037   51735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:56:14.074385   51735 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-415209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:56:14.074482   51735 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:56:14.074535   51735 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:56:14.118320   51735 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 17:56:14.118422   51735 ssh_runner.go:195] Run: which lz4
	I0819 17:56:14.122124   51735 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 17:56:14.126247   51735 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 17:56:14.126288   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 17:56:15.648196   51735 crio.go:462] duration metric: took 1.52610652s to copy over tarball
	I0819 17:56:15.648291   51735 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 17:56:18.123041   51735 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.474705888s)
	I0819 17:56:18.123072   51735 crio.go:469] duration metric: took 2.474846202s to extract the tarball
	I0819 17:56:18.123081   51735 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 17:56:18.163892   51735 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:56:18.211046   51735 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 17:56:18.211069   51735 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 17:56:18.211138   51735 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 17:56:18.211165   51735 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 17:56:18.211175   51735 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 17:56:18.211195   51735 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 17:56:18.211194   51735 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 17:56:18.211223   51735 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 17:56:18.211276   51735 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 17:56:18.211145   51735 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:56:18.212782   51735 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 17:56:18.213142   51735 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 17:56:18.213357   51735 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:56:18.213367   51735 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 17:56:18.213382   51735 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 17:56:18.213430   51735 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 17:56:18.213361   51735 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 17:56:18.213370   51735 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 17:56:18.468499   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 17:56:18.508863   51735 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 17:56:18.508899   51735 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 17:56:18.508935   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:18.512585   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 17:56:18.543695   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 17:56:18.545252   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 17:56:18.547000   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 17:56:18.549090   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 17:56:18.553822   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 17:56:18.570058   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 17:56:18.573474   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 17:56:18.694727   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 17:56:18.694741   51735 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 17:56:18.694780   51735 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 17:56:18.694824   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:18.720790   51735 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 17:56:18.720841   51735 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 17:56:18.720845   51735 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 17:56:18.720892   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:18.720908   51735 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 17:56:18.720914   51735 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 17:56:18.720936   51735 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 17:56:18.720968   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:18.720971   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:18.721027   51735 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 17:56:18.721055   51735 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 17:56:18.721091   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:18.725711   51735 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 17:56:18.725740   51735 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 17:56:18.725742   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 17:56:18.725775   51735 ssh_runner.go:195] Run: which crictl
	I0819 17:56:18.761307   51735 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 17:56:18.761389   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 17:56:18.761443   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 17:56:18.761476   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 17:56:18.761446   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 17:56:18.781488   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 17:56:18.781605   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 17:56:18.880778   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 17:56:18.880841   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 17:56:18.880859   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 17:56:18.880842   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 17:56:18.898248   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 17:56:18.898266   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 17:56:18.990294   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 17:56:19.003758   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 17:56:19.003788   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 17:56:19.003841   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 17:56:19.015580   51735 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 17:56:19.017704   51735 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 17:56:19.043620   51735 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:56:19.096645   51735 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 17:56:19.115801   51735 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 17:56:19.115945   51735 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 17:56:19.117849   51735 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 17:56:19.118045   51735 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 17:56:19.237886   51735 cache_images.go:92] duration metric: took 1.026800287s to LoadCachedImages
	W0819 17:56:19.237986   51735 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
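The warning above only means the local image cache is missing; minikube falls back to pulling the images as needed later in the run. A hedged sketch of pre-pulling the same v1.20.0 image set straight into CRI-O by hand (registry access assumed; image list taken from the LoadCachedImages entry above):
	for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
	  sudo crictl pull "registry.k8s.io/${img}:v1.20.0"
	done
	sudo crictl pull registry.k8s.io/etcd:3.4.13-0
	sudo crictl pull registry.k8s.io/coredns:1.7.0
	sudo crictl pull registry.k8s.io/pause:3.2
	sudo crictl pull gcr.io/k8s-minikube/storage-provisioner:v5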
	I0819 17:56:19.238003   51735 kubeadm.go:934] updating node { 192.168.39.81 8443 v1.20.0 crio true true} ...
	I0819 17:56:19.238153   51735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-415209 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:56:19.238241   51735 ssh_runner.go:195] Run: crio config
	I0819 17:56:19.284548   51735 cni.go:84] Creating CNI manager for ""
	I0819 17:56:19.284573   51735 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 17:56:19.284585   51735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:56:19.284607   51735 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-415209 NodeName:kubernetes-upgrade-415209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 17:56:19.284745   51735 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-415209"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:56:19.284836   51735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 17:56:19.294474   51735 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:56:19.294555   51735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:56:19.303196   51735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0819 17:56:19.319429   51735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:56:19.334857   51735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 17:56:19.350049   51735 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I0819 17:56:19.353650   51735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:56:19.365910   51735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:56:19.486550   51735 ssh_runner.go:195] Run: sudo systemctl start kubelet
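A minimal in-guest sketch for checking that the kubelet unit and drop-in written just above are active; the paths are the ones scp'd in the preceding steps:
	systemctl status kubelet --no-pager
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo journalctl -u kubelet --no-pager -n 20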
	I0819 17:56:19.503020   51735 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209 for IP: 192.168.39.81
	I0819 17:56:19.503047   51735 certs.go:194] generating shared ca certs ...
	I0819 17:56:19.503067   51735 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:19.503238   51735 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 17:56:19.503288   51735 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 17:56:19.503300   51735 certs.go:256] generating profile certs ...
	I0819 17:56:19.503380   51735 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/client.key
	I0819 17:56:19.503399   51735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/client.crt with IP's: []
	I0819 17:56:19.698437   51735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/client.crt ...
	I0819 17:56:19.698469   51735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/client.crt: {Name:mk6f4a83d77610f3708a68d0f4d1a99c676e4858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:19.698679   51735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/client.key ...
	I0819 17:56:19.698699   51735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/client.key: {Name:mk3f6a79c497fd221695041c19d5541793cbc720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:19.698816   51735 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.key.54e8b26e
	I0819 17:56:19.698858   51735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.crt.54e8b26e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.81]
	I0819 17:56:19.865052   51735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.crt.54e8b26e ...
	I0819 17:56:19.865081   51735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.crt.54e8b26e: {Name:mk91b12dafb7d555a385b50f9391f4ab48d2f47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:19.865283   51735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.key.54e8b26e ...
	I0819 17:56:19.865306   51735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.key.54e8b26e: {Name:mk8df87ab2a73e623c2aabe46bad82608d7a5204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:19.865415   51735 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.crt.54e8b26e -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.crt
	I0819 17:56:19.865532   51735 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.key.54e8b26e -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.key
	I0819 17:56:19.865632   51735 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.key
	I0819 17:56:19.865660   51735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.crt with IP's: []
	I0819 17:56:20.073758   51735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.crt ...
	I0819 17:56:20.073790   51735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.crt: {Name:mkf39bd61db7c45b7d003cbb494a929fbdca55c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:20.073973   51735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.key ...
	I0819 17:56:20.073993   51735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.key: {Name:mka4501446746e9c83684144a3c7034b20602c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:20.074183   51735 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 17:56:20.074236   51735 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 17:56:20.074248   51735 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 17:56:20.074282   51735 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:56:20.074315   51735 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:56:20.074347   51735 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 17:56:20.074496   51735 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 17:56:20.075144   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:56:20.101347   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:56:20.124262   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:56:20.146787   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:56:20.169316   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 17:56:20.191464   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 17:56:20.214163   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:56:20.236683   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kubernetes-upgrade-415209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 17:56:20.259225   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:56:20.281772   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 17:56:20.310223   51735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 17:56:20.340823   51735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
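	The scp calls above stage the CA, the profile certs, and the kubeconfig under /var/lib/minikube on the node before kubeadm runs. A minimal sketch of spot-checking the staged material (same paths as in the log; run on the node):
	  # list what was copied into the kubeadm certificate directory
	  sudo ls -l /var/lib/minikube/certs/
	  # confirm the staged apiserver cert is present, readable, and not expired
	  sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt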
	I0819 17:56:20.359372   51735 ssh_runner.go:195] Run: openssl version
	I0819 17:56:20.365483   51735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 17:56:20.378236   51735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 17:56:20.382920   51735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 17:56:20.382986   51735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 17:56:20.390023   51735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 17:56:20.405586   51735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:56:20.417108   51735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:56:20.421417   51735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:56:20.421472   51735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:56:20.426846   51735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:56:20.437405   51735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 17:56:20.448650   51735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 17:56:20.454103   51735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 17:56:20.454165   51735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 17:56:20.459520   51735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
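	The test -L / ln -fs pairs above follow the standard OpenSSL subject-hash convention: each CA under /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs so certificate verification can locate it. A minimal sketch of the same step for one cert (path taken from the log):
	  # compute the subject hash OpenSSL uses to look up a CA at verify time
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem)
	  # expose the cert under that hash so tools trusting /etc/ssl/certs can find it
	  sudo ln -fs /usr/share/ca-certificates/17837.pem "/etc/ssl/certs/${HASH}.0"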
	I0819 17:56:20.470316   51735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:56:20.474149   51735 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:56:20.474212   51735 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-415209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:56:20.474306   51735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:56:20.474375   51735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:56:20.511262   51735 cri.go:89] found id: ""
	I0819 17:56:20.511344   51735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:56:20.521108   51735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:56:20.531542   51735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:56:20.541420   51735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:56:20.541445   51735 kubeadm.go:157] found existing configuration files:
	
	I0819 17:56:20.541489   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:56:20.550793   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:56:20.550852   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:56:20.560055   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:56:20.568544   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:56:20.568622   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:56:20.577595   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:56:20.587052   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:56:20.587126   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:56:20.596184   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:56:20.604994   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:56:20.605055   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
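	The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm init. A compressed sketch of the same logic (file names and endpoint from the log):
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # keep the file only if it points at the expected control-plane endpoint
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done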
	I0819 17:56:20.615635   51735 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 17:56:20.731175   51735 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 17:56:20.731252   51735 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:56:20.884346   51735 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:56:20.884471   51735 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:56:20.884571   51735 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 17:56:21.063533   51735 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:56:21.183657   51735 out.go:235]   - Generating certificates and keys ...
	I0819 17:56:21.183808   51735 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:56:21.183866   51735 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:56:21.358221   51735 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:56:21.573866   51735 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:56:21.913994   51735 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:56:22.130484   51735 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:56:22.300714   51735 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:56:22.300956   51735 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-415209 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	I0819 17:56:22.538823   51735 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:56:22.539081   51735 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-415209 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	I0819 17:56:22.663067   51735 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:56:23.042728   51735 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:56:23.106981   51735 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:56:23.107105   51735 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:56:23.361370   51735 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:56:23.487499   51735 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:56:23.653063   51735 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:56:23.899797   51735 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:56:23.914395   51735 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:56:23.915366   51735 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:56:23.915421   51735 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:56:24.033625   51735 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:56:24.038006   51735 out.go:235]   - Booting up control plane ...
	I0819 17:56:24.038135   51735 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:56:24.040730   51735 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:56:24.043616   51735 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:56:24.044588   51735 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:56:24.049899   51735 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 17:57:04.041817   51735 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 17:57:04.042152   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:57:04.042401   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:57:09.042439   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:57:09.042743   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:57:19.042994   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:57:19.043241   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:57:39.043999   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:57:39.044277   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:58:19.046133   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:58:19.046361   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:58:19.046373   51735 kubeadm.go:310] 
	I0819 17:58:19.046422   51735 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 17:58:19.046485   51735 kubeadm.go:310] 		timed out waiting for the condition
	I0819 17:58:19.046493   51735 kubeadm.go:310] 
	I0819 17:58:19.046539   51735 kubeadm.go:310] 	This error is likely caused by:
	I0819 17:58:19.046580   51735 kubeadm.go:310] 		- The kubelet is not running
	I0819 17:58:19.046723   51735 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 17:58:19.046730   51735 kubeadm.go:310] 
	I0819 17:58:19.046870   51735 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 17:58:19.046915   51735 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 17:58:19.046961   51735 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 17:58:19.046968   51735 kubeadm.go:310] 
	I0819 17:58:19.047113   51735 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 17:58:19.047221   51735 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 17:58:19.047227   51735 kubeadm.go:310] 
	I0819 17:58:19.047360   51735 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 17:58:19.047479   51735 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 17:58:19.047577   51735 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 17:58:19.047676   51735 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 17:58:19.047684   51735 kubeadm.go:310] 
	I0819 17:58:19.049667   51735 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:58:19.049794   51735 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 17:58:19.049878   51735 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0819 17:58:19.050047   51735 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-415209 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-415209 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-415209 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-415209 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
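	The triage the kubeadm output suggests boils down to a few commands; a minimal sketch, run on the node (CRI-O socket path as in the log):
	  # is the kubelet running, and why did it exit if not?
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet --no-pager | tail -n 100
	  # probe the healthz endpoint the [kubelet-check] loop polls
	  curl -sSL http://localhost:10248/healthz
	  # list any control-plane containers CRI-O managed to start
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause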
	
	I0819 17:58:19.050096   51735 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 17:58:19.646655   51735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:58:19.661944   51735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:58:19.672091   51735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:58:19.672116   51735 kubeadm.go:157] found existing configuration files:
	
	I0819 17:58:19.672171   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:58:19.681628   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:58:19.681704   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:58:19.690938   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:58:19.704208   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:58:19.704305   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:58:19.716345   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:58:19.728547   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:58:19.728706   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:58:19.741190   51735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:58:19.751238   51735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:58:19.751306   51735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:58:19.764962   51735 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 17:58:19.842921   51735 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 17:58:19.843039   51735 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:58:20.019503   51735 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:58:20.019740   51735 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:58:20.019951   51735 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 17:58:20.240651   51735 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:58:20.242650   51735 out.go:235]   - Generating certificates and keys ...
	I0819 17:58:20.242753   51735 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:58:20.242838   51735 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:58:20.242968   51735 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 17:58:20.243058   51735 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 17:58:20.243216   51735 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 17:58:20.243314   51735 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 17:58:20.243441   51735 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 17:58:20.243548   51735 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 17:58:20.243681   51735 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 17:58:20.243793   51735 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 17:58:20.243847   51735 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 17:58:20.243924   51735 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:58:20.386715   51735 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:58:20.482095   51735 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:58:20.868442   51735 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:58:20.938899   51735 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:58:20.958101   51735 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:58:20.959515   51735 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:58:20.959580   51735 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:58:21.152311   51735 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:58:21.154069   51735 out.go:235]   - Booting up control plane ...
	I0819 17:58:21.154200   51735 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:58:21.161098   51735 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:58:21.161648   51735 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:58:21.164470   51735 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:58:21.167802   51735 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 17:59:01.166360   51735 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 17:59:01.166474   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:59:01.166764   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:59:06.166754   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:59:06.166970   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:59:16.167120   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:59:16.167348   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 17:59:36.167615   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 17:59:36.167907   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:00:16.169471   51735 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:00:16.169764   51735 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:00:16.169784   51735 kubeadm.go:310] 
	I0819 18:00:16.169844   51735 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:00:16.169908   51735 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:00:16.169919   51735 kubeadm.go:310] 
	I0819 18:00:16.169995   51735 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:00:16.170051   51735 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:00:16.170209   51735 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:00:16.170225   51735 kubeadm.go:310] 
	I0819 18:00:16.170366   51735 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:00:16.170424   51735 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:00:16.170472   51735 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:00:16.170484   51735 kubeadm.go:310] 
	I0819 18:00:16.170639   51735 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:00:16.170778   51735 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:00:16.170798   51735 kubeadm.go:310] 
	I0819 18:00:16.170964   51735 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:00:16.171100   51735 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:00:16.171217   51735 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:00:16.171327   51735 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:00:16.171347   51735 kubeadm.go:310] 
	I0819 18:00:16.172140   51735 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:00:16.172243   51735 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:00:16.172329   51735 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:00:16.172400   51735 kubeadm.go:394] duration metric: took 3m55.698192763s to StartCluster
	I0819 18:00:16.172443   51735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:00:16.172496   51735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:00:16.218124   51735 cri.go:89] found id: ""
	I0819 18:00:16.218165   51735 logs.go:276] 0 containers: []
	W0819 18:00:16.218177   51735 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:00:16.218184   51735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:00:16.218244   51735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:00:16.259244   51735 cri.go:89] found id: ""
	I0819 18:00:16.259274   51735 logs.go:276] 0 containers: []
	W0819 18:00:16.259286   51735 logs.go:278] No container was found matching "etcd"
	I0819 18:00:16.259293   51735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:00:16.259365   51735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:00:16.330878   51735 cri.go:89] found id: ""
	I0819 18:00:16.330908   51735 logs.go:276] 0 containers: []
	W0819 18:00:16.330918   51735 logs.go:278] No container was found matching "coredns"
	I0819 18:00:16.330926   51735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:00:16.330993   51735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:00:16.367112   51735 cri.go:89] found id: ""
	I0819 18:00:16.367143   51735 logs.go:276] 0 containers: []
	W0819 18:00:16.367151   51735 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:00:16.367160   51735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:00:16.367229   51735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:00:16.415267   51735 cri.go:89] found id: ""
	I0819 18:00:16.415302   51735 logs.go:276] 0 containers: []
	W0819 18:00:16.415313   51735 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:00:16.415322   51735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:00:16.415386   51735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:00:16.459989   51735 cri.go:89] found id: ""
	I0819 18:00:16.460019   51735 logs.go:276] 0 containers: []
	W0819 18:00:16.460030   51735 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:00:16.460038   51735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:00:16.460106   51735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:00:16.498731   51735 cri.go:89] found id: ""
	I0819 18:00:16.498762   51735 logs.go:276] 0 containers: []
	W0819 18:00:16.498774   51735 logs.go:278] No container was found matching "kindnet"
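	The per-component sweep above (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) found no containers at all, consistent with the kubelet never coming up. The equivalent manual check is a sketch like:
	  # on this node nothing should match, confirming no control-plane container ever started
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    echo "== $c =="
	    sudo crictl ps -a --quiet --name="$c"
	  done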
	I0819 18:00:16.498787   51735 logs.go:123] Gathering logs for kubelet ...
	I0819 18:00:16.498803   51735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:00:16.556981   51735 logs.go:123] Gathering logs for dmesg ...
	I0819 18:00:16.557019   51735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:00:16.572367   51735 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:00:16.572406   51735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:00:16.727112   51735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
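	The describe-nodes failure above only means nothing is serving on the API port yet; a quick sanity check before digging into kubelet logs (port from the log):
	  # confirm whether anything is listening on the apiserver port
	  sudo ss -tlnp | grep 8443 || echo "kube-apiserver is not listening on :8443"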
	I0819 18:00:16.727143   51735 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:00:16.727161   51735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:00:16.833146   51735 logs.go:123] Gathering logs for container status ...
	I0819 18:00:16.833179   51735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 18:00:16.874962   51735 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:00:16.875017   51735 out.go:270] * 
	* 
	W0819 18:00:16.875082   51735 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:00:16.875101   51735 out.go:270] * 
	* 
	W0819 18:00:16.876238   51735 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:00:16.879109   51735 out.go:201] 
	W0819 18:00:16.880532   51735 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:00:16.880579   51735 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:00:16.880596   51735 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:00:16.882132   51735 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-415209
E0819 18:00:21.262837   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-415209: (6.333367761s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-415209 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-415209 status --format={{.Host}}: exit status 7 (66.094079ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.675551313s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-415209 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.15103ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-415209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-415209
	    minikube start -p kubernetes-upgrade-415209 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4152092 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-415209 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-415209 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.703833387s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-19 18:02:03.855532924 +0000 UTC m=+4185.559700817
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-415209 -n kubernetes-upgrade-415209
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-415209 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-415209 logs -n 25: (1.639156209s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-321572 sudo cat              | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-321572 sudo cat              | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-321572 sudo                  | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-321572 sudo                  | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-321572 sudo                  | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-321572 sudo find             | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-321572 sudo crio             | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-321572                       | cilium-321572             | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 17:59 UTC |
	| start   | -p cert-expiration-975771              | cert-expiration-975771    | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 18:00 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-415209           | kubernetes-upgrade-415209 | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	| start   | -p kubernetes-upgrade-415209           | kubernetes-upgrade-415209 | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:01 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-380066            | force-systemd-env-380066  | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	| start   | -p force-systemd-flag-170488           | force-systemd-flag-170488 | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:01 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-164373                        | pause-164373              | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:01 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-415209           | kubernetes-upgrade-415209 | jenkins | v1.33.1 | 19 Aug 24 18:01 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-415209           | kubernetes-upgrade-415209 | jenkins | v1.33.1 | 19 Aug 24 18:01 UTC | 19 Aug 24 18:02 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-170488 ssh cat      | force-systemd-flag-170488 | jenkins | v1.33.1 | 19 Aug 24 18:01 UTC | 19 Aug 24 18:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-170488           | force-systemd-flag-170488 | jenkins | v1.33.1 | 19 Aug 24 18:01 UTC | 19 Aug 24 18:01 UTC |
	| start   | -p cert-options-948260                 | cert-options-948260       | jenkins | v1.33.1 | 19 Aug 24 18:01 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| pause   | -p pause-164373                        | pause-164373              | jenkins | v1.33.1 | 19 Aug 24 18:01 UTC | 19 Aug 24 18:01 UTC |
	|         | --alsologtostderr -v=5                 |                           |         |         |                     |                     |
	| unpause | -p pause-164373                        | pause-164373              | jenkins | v1.33.1 | 19 Aug 24 18:01 UTC | 19 Aug 24 18:02 UTC |
	|         | --alsologtostderr -v=5                 |                           |         |         |                     |                     |
	| pause   | -p pause-164373                        | pause-164373              | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	|         | --alsologtostderr -v=5                 |                           |         |         |                     |                     |
	| delete  | -p pause-164373                        | pause-164373              | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	|         | --alsologtostderr -v=5                 |                           |         |         |                     |                     |
	| delete  | -p pause-164373                        | pause-164373              | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	| start   | -p old-k8s-version-079123              | old-k8s-version-079123    | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:02:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:02:02.810247   59547 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:02:02.810366   59547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:02:02.810376   59547 out.go:358] Setting ErrFile to fd 2...
	I0819 18:02:02.810382   59547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:02:02.810697   59547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:02:02.811347   59547 out.go:352] Setting JSON to false
	I0819 18:02:02.812502   59547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6268,"bootTime":1724084255,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:02:02.812585   59547 start.go:139] virtualization: kvm guest
	I0819 18:02:02.814789   59547 out.go:177] * [old-k8s-version-079123] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:02:02.816120   59547 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:02:02.816181   59547 notify.go:220] Checking for updates...
	I0819 18:02:02.818843   59547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:02:02.820246   59547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:02:02.821654   59547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:02:02.822956   59547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:02:02.824159   59547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:02:02.825932   59547 config.go:182] Loaded profile config "cert-expiration-975771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:02.826072   59547 config.go:182] Loaded profile config "cert-options-948260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:02.826187   59547 config.go:182] Loaded profile config "kubernetes-upgrade-415209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:02.826295   59547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:02:02.865548   59547 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:02:02.866879   59547 start.go:297] selected driver: kvm2
	I0819 18:02:02.866899   59547 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:02:02.866911   59547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:02:02.867596   59547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:02:02.867671   59547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:02:02.884258   59547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:02:02.884309   59547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:02:02.884518   59547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:02:02.884553   59547 cni.go:84] Creating CNI manager for ""
	I0819 18:02:02.884560   59547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:02:02.884567   59547 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:02:02.884615   59547 start.go:340] cluster config:
	{Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:02:02.884722   59547 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:02:02.886644   59547 out.go:177] * Starting "old-k8s-version-079123" primary control-plane node in "old-k8s-version-079123" cluster
	I0819 18:02:03.079297   58977 main.go:141] libmachine: (cert-options-948260) DBG | domain cert-options-948260 has defined MAC address 52:54:00:cc:3a:44 in network mk-cert-options-948260
	I0819 18:02:03.079802   58977 main.go:141] libmachine: (cert-options-948260) Found IP for machine: 192.168.83.250
	I0819 18:02:03.079826   58977 main.go:141] libmachine: (cert-options-948260) Reserving static IP address...
	I0819 18:02:03.079840   58977 main.go:141] libmachine: (cert-options-948260) DBG | domain cert-options-948260 has current primary IP address 192.168.83.250 and MAC address 52:54:00:cc:3a:44 in network mk-cert-options-948260
	I0819 18:02:03.080137   58977 main.go:141] libmachine: (cert-options-948260) DBG | unable to find host DHCP lease matching {name: "cert-options-948260", mac: "52:54:00:cc:3a:44", ip: "192.168.83.250"} in network mk-cert-options-948260
	I0819 18:02:03.158446   58977 main.go:141] libmachine: (cert-options-948260) DBG | Getting to WaitForSSH function...
	I0819 18:02:03.158462   58977 main.go:141] libmachine: (cert-options-948260) Reserved static IP address: 192.168.83.250
	I0819 18:02:03.158476   58977 main.go:141] libmachine: (cert-options-948260) Waiting for SSH to be available...
	I0819 18:02:03.161203   58977 main.go:141] libmachine: (cert-options-948260) DBG | domain cert-options-948260 has defined MAC address 52:54:00:cc:3a:44 in network mk-cert-options-948260
	I0819 18:02:03.161559   58977 main.go:141] libmachine: (cert-options-948260) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cc:3a:44", ip: ""} in network mk-cert-options-948260
	I0819 18:02:03.161661   58977 main.go:141] libmachine: (cert-options-948260) DBG | unable to find defined IP address of network mk-cert-options-948260 interface with MAC address 52:54:00:cc:3a:44
	I0819 18:02:03.161765   58977 main.go:141] libmachine: (cert-options-948260) DBG | Using SSH client type: external
	I0819 18:02:03.161784   58977 main.go:141] libmachine: (cert-options-948260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/cert-options-948260/id_rsa (-rw-------)
	I0819 18:02:03.161857   58977 main.go:141] libmachine: (cert-options-948260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/cert-options-948260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:02:03.161875   58977 main.go:141] libmachine: (cert-options-948260) DBG | About to run SSH command:
	I0819 18:02:03.161893   58977 main.go:141] libmachine: (cert-options-948260) DBG | exit 0
	I0819 18:02:03.165714   58977 main.go:141] libmachine: (cert-options-948260) DBG | SSH cmd err, output: exit status 255: 
	I0819 18:02:03.165732   58977 main.go:141] libmachine: (cert-options-948260) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0819 18:02:03.165753   58977 main.go:141] libmachine: (cert-options-948260) DBG | command : exit 0
	I0819 18:02:03.165760   58977 main.go:141] libmachine: (cert-options-948260) DBG | err     : exit status 255
	I0819 18:02:03.165773   58977 main.go:141] libmachine: (cert-options-948260) DBG | output  : 
	I0819 18:02:02.790473   58640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:02:02.790493   58640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:02:02.790517   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 18:02:02.794257   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 18:02:02.794910   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 19:00:44 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 18:02:02.794935   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 18:02:02.795148   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 18:02:02.795349   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 18:02:02.795550   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 18:02:02.795709   58640 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa Username:docker}
	I0819 18:02:02.807067   58640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0819 18:02:02.807676   58640 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:02.808324   58640 main.go:141] libmachine: Using API Version  1
	I0819 18:02:02.808340   58640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:02.808807   58640 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:02.809016   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetState
	I0819 18:02:02.811133   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .DriverName
	I0819 18:02:02.811392   58640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:02:02.811405   58640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:02:02.811423   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHHostname
	I0819 18:02:02.814543   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 18:02:02.814954   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:14:03", ip: ""} in network mk-kubernetes-upgrade-415209: {Iface:virbr1 ExpiryTime:2024-08-19 19:00:44 +0000 UTC Type:0 Mac:52:54:00:7a:14:03 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:kubernetes-upgrade-415209 Clientid:01:52:54:00:7a:14:03}
	I0819 18:02:02.814979   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | domain kubernetes-upgrade-415209 has defined IP address 192.168.39.81 and MAC address 52:54:00:7a:14:03 in network mk-kubernetes-upgrade-415209
	I0819 18:02:02.815207   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHPort
	I0819 18:02:02.815371   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHKeyPath
	I0819 18:02:02.815503   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .GetSSHUsername
	I0819 18:02:02.815630   58640 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/kubernetes-upgrade-415209/id_rsa Username:docker}
	I0819 18:02:02.940051   58640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:02:02.957537   58640 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:02:02.957616   58640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:02:02.972456   58640 api_server.go:72] duration metric: took 231.580039ms to wait for apiserver process to appear ...
	I0819 18:02:02.972488   58640 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:02:02.972513   58640 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0819 18:02:02.982425   58640 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I0819 18:02:02.983630   58640 api_server.go:141] control plane version: v1.31.0
	I0819 18:02:02.983658   58640 api_server.go:131] duration metric: took 11.156074ms to wait for apiserver health ...
	I0819 18:02:02.983669   58640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:02:02.991527   58640 system_pods.go:59] 8 kube-system pods found
	I0819 18:02:02.991556   58640 system_pods.go:61] "coredns-6f6b679f8f-6hvhs" [192399ae-50bf-4a56-a6e0-fd7fa3cc2d79] Running
	I0819 18:02:02.991561   58640 system_pods.go:61] "coredns-6f6b679f8f-m7tnl" [214207fe-b389-47ea-9d6d-003b712e21d7] Running
	I0819 18:02:02.991567   58640 system_pods.go:61] "etcd-kubernetes-upgrade-415209" [3ba08727-a36a-4b6c-93d1-093ade875ba4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 18:02:02.991574   58640 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-415209" [e5b23835-6764-46ac-8627-22c92537cf95] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 18:02:02.991582   58640 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-415209" [77981af7-cca5-48f9-a474-1a726e206bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 18:02:02.991586   58640 system_pods.go:61] "kube-proxy-gfttw" [c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3] Running
	I0819 18:02:02.991592   58640 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-415209" [d8efcf41-df7d-4f8e-83d0-2314184a38c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 18:02:02.991595   58640 system_pods.go:61] "storage-provisioner" [e010ccc3-6784-4fc4-8ac6-c3601bfe31a3] Running
	I0819 18:02:02.991601   58640 system_pods.go:74] duration metric: took 7.926664ms to wait for pod list to return data ...
	I0819 18:02:02.991610   58640 kubeadm.go:582] duration metric: took 250.738597ms to wait for: map[apiserver:true system_pods:true]
	I0819 18:02:02.991625   58640 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:02:02.994527   58640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:02:02.994545   58640 node_conditions.go:123] node cpu capacity is 2
	I0819 18:02:02.994553   58640 node_conditions.go:105] duration metric: took 2.924379ms to run NodePressure ...
	I0819 18:02:02.994563   58640 start.go:241] waiting for startup goroutines ...
	I0819 18:02:03.044943   58640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:02:03.064565   58640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:02:03.279062   58640 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:03.279092   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .Close
	I0819 18:02:03.279447   58640 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:03.279469   58640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:03.279478   58640 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:03.279486   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .Close
	I0819 18:02:03.279448   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Closing plugin on server side
	I0819 18:02:03.279748   58640 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:03.279766   58640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:03.290731   58640 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:03.290749   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .Close
	I0819 18:02:03.290986   58640 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:03.291000   58640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:03.789869   58640 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:03.789893   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .Close
	I0819 18:02:03.790149   58640 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:03.790197   58640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:03.790211   58640 main.go:141] libmachine: Making call to close driver server
	I0819 18:02:03.790159   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) DBG | Closing plugin on server side
	I0819 18:02:03.790224   58640 main.go:141] libmachine: (kubernetes-upgrade-415209) Calling .Close
	I0819 18:02:03.790526   58640 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:02:03.790537   58640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:02:03.792877   58640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 18:02:03.794178   58640 addons.go:510] duration metric: took 1.053286932s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 18:02:03.794209   58640 start.go:246] waiting for cluster config update ...
	I0819 18:02:03.794219   58640 start.go:255] writing updated cluster config ...
	I0819 18:02:03.794431   58640 ssh_runner.go:195] Run: rm -f paused
	I0819 18:02:03.841427   58640 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:02:03.843432   58640 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-415209" cluster and "default" namespace by default
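For reference, the apiserver readiness gate logged above (the `/healthz` probe returning `200 ok`) can be reproduced independently of minikube. The snippet below is a minimal sketch of an equivalent check, not minikube's own api_server.go code; it assumes anonymous access to `/healthz` is enabled (the default on this cluster) and skips certificate verification because the apiserver uses a self-signed cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip TLS verification: the cluster serves a self-signed certificate.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// Same endpoint the log shows minikube polling.
	resp, err := client.Get("https://192.168.39.81:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expected: 200 ok
}
```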
	
	
	==> CRI-O <==
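The debug entries below are CRI gRPC traffic hitting the CRI-O daemon: `Version`, `ImageFsInfo`, and `ListContainers` requests, where an empty filter yields the "No filters were applied, returning full container list" path. As a rough sketch (not part of the test suite), the same RPCs can be issued directly against the node's runtime socket; the socket path `/var/run/crio/crio.sock` is the usual CRI-O default and is an assumption here.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to the CRI-O socket (assumed default path on the node).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// Version RPC, as in the log: cri-o 1.29.1.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.GetRuntimeName(), ver.GetRuntimeVersion())

	// ImageFsInfo RPC: image filesystem usage under overlay-images.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(fs.GetImageFilesystems())

	// ListContainers with an empty filter returns the full container list.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range cs.GetContainers() {
		fmt.Println(c.GetMetadata().GetName(), c.GetState())
	}
}
```

In practice `crictl` (pointed at the same socket) issues these RPCs for you, which is why the daemon logs them at regular intervals while the test harness collects diagnostics.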
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.531816602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090524531794423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d1dab51-632c-4980-8d96-a4cd707a6dca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.532289457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5552b9d-33e8-4ee9-93d7-7883089f9601 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.532365390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5552b9d-33e8-4ee9-93d7-7883089f9601 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.533111922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aff04e7c6f755e7deb5788ea173af5634953b5be35d038047f4a2902a209042d,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724090521543592311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a48813ea7c603cf1f2562ce43721dcba88d4e89571cb56da91c9287950ece8,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090517741445685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0889f39e902b3637361264d727e033982fe41e2acb1957611c38f67bcdc5a21,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090517730551699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b332148400afafc83ba89d2d516152a0e1400d5890cd03c2cf7c004e738ef,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090517719438316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4445a6aadf7c24d19f462636ac91283e94e21059061f66d534a35131df3bd909,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090514707404310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ed377616030bc2029c6fb29a08691a58603fd14212232634bb14ffddedb3,PodSandboxId:ff39b90e5d9b1dd7f19c84398e0f3d185685f066b1f0698948fe06cdc25618ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495738488952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b984c0263c4f2eb936dc659778d80d8d71f4417667291381d2278428fed760c,PodSandboxId:2bbdda436b3ffe83621d5133aa3b3bfd7c02af2554240aaf16714d5fdbfec174,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090494664885524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf
-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36e1fe7e5a4957aa704dafce82dfb07ecb341d2daa35fac7291ba806a1ec22e,PodSandboxId:e940f24f8176fab4a916b5a07ee950e650cc09912af7e6f2559e3da43cbf2625,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495666892561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b389437911c6712d32111bb3ed6769b145cc5c8c90791c1edeb6b1c2c825db2,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090494587876944,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539a13def6e0208747e810aa688c7f5c4cc2d04eb71e5177a21b9143cdbb5443,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090494426541055,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d479f9e70a0c4c076da0e910b0efcdd04a5443104a6e0a696cfbe090f45c2c50,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090494400982146,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3872e7369ccec420a99b1a6e99e0e605c409a69c5069c19f1de96810cc92182e,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090494363409368,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daf25e5ac8dd8eff20a3072e72d83126c01cb15c814992b8025e9c2bd400d12,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724090494316881680,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444da43334ede1632a0e1fe511c7495fb0d20bc738b1816f089e0799b0689a19,PodSandboxId:128f2ba3964caf00850cf2b666e426fd2e25039d9ce811e6e28e5e6616c494ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476543448832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubern
etes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbc829741236ef3ad398074d29a76cb934a957b4fe956f9a5f53ec1d8af902c,PodSandboxId:0655a8a77192bb76b7872e7d9029eea346b4508a1882fa4ec857b048b92fbae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476493080825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6135f1328c820c68c5cd034b003845151f0ba3e879beef2170cb3b3350812829,PodSandboxId:63abda04ee088b31ebefa6df7dbd90ea8ad7d671bcb12b4b9a15a6f6bcd267b5,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090475982180553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5552b9d-33e8-4ee9-93d7-7883089f9601 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.597619226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e82a6a2e-27fe-4edf-8073-840a7f06b694 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.597700898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e82a6a2e-27fe-4edf-8073-840a7f06b694 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.607543744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0baafa77-b8ff-437e-a9b6-d015aaecd262 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.607921150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090524607895814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0baafa77-b8ff-437e-a9b6-d015aaecd262 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.608502553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b062ffb9-d0d9-40ee-9ec2-5e4e87f7096b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.608571352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b062ffb9-d0d9-40ee-9ec2-5e4e87f7096b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.608928846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aff04e7c6f755e7deb5788ea173af5634953b5be35d038047f4a2902a209042d,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724090521543592311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a48813ea7c603cf1f2562ce43721dcba88d4e89571cb56da91c9287950ece8,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090517741445685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0889f39e902b3637361264d727e033982fe41e2acb1957611c38f67bcdc5a21,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090517730551699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b332148400afafc83ba89d2d516152a0e1400d5890cd03c2cf7c004e738ef,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090517719438316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4445a6aadf7c24d19f462636ac91283e94e21059061f66d534a35131df3bd909,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090514707404310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ed377616030bc2029c6fb29a08691a58603fd14212232634bb14ffddedb3,PodSandboxId:ff39b90e5d9b1dd7f19c84398e0f3d185685f066b1f0698948fe06cdc25618ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495738488952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b984c0263c4f2eb936dc659778d80d8d71f4417667291381d2278428fed760c,PodSandboxId:2bbdda436b3ffe83621d5133aa3b3bfd7c02af2554240aaf16714d5fdbfec174,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090494664885524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf
-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36e1fe7e5a4957aa704dafce82dfb07ecb341d2daa35fac7291ba806a1ec22e,PodSandboxId:e940f24f8176fab4a916b5a07ee950e650cc09912af7e6f2559e3da43cbf2625,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495666892561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b389437911c6712d32111bb3ed6769b145cc5c8c90791c1edeb6b1c2c825db2,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090494587876944,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539a13def6e0208747e810aa688c7f5c4cc2d04eb71e5177a21b9143cdbb5443,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090494426541055,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d479f9e70a0c4c076da0e910b0efcdd04a5443104a6e0a696cfbe090f45c2c50,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090494400982146,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3872e7369ccec420a99b1a6e99e0e605c409a69c5069c19f1de96810cc92182e,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090494363409368,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daf25e5ac8dd8eff20a3072e72d83126c01cb15c814992b8025e9c2bd400d12,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724090494316881680,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444da43334ede1632a0e1fe511c7495fb0d20bc738b1816f089e0799b0689a19,PodSandboxId:128f2ba3964caf00850cf2b666e426fd2e25039d9ce811e6e28e5e6616c494ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476543448832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubern
etes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbc829741236ef3ad398074d29a76cb934a957b4fe956f9a5f53ec1d8af902c,PodSandboxId:0655a8a77192bb76b7872e7d9029eea346b4508a1882fa4ec857b048b92fbae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476493080825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6135f1328c820c68c5cd034b003845151f0ba3e879beef2170cb3b3350812829,PodSandboxId:63abda04ee088b31ebefa6df7dbd90ea8ad7d671bcb12b4b9a15a6f6bcd267b5,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090475982180553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b062ffb9-d0d9-40ee-9ec2-5e4e87f7096b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.659912030Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a000dd9-e280-44b0-bc88-a18ba032060b name=/runtime.v1.RuntimeService/Version
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.660006881Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a000dd9-e280-44b0-bc88-a18ba032060b name=/runtime.v1.RuntimeService/Version
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.661090444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28f67616-58e8-463d-ae71-a9735552be3d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.661602894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090524661576663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28f67616-58e8-463d-ae71-a9735552be3d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.662314940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc2a130d-dccc-409a-a1b7-51f1d6c2e67c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.662378452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc2a130d-dccc-409a-a1b7-51f1d6c2e67c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.662746573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aff04e7c6f755e7deb5788ea173af5634953b5be35d038047f4a2902a209042d,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724090521543592311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a48813ea7c603cf1f2562ce43721dcba88d4e89571cb56da91c9287950ece8,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090517741445685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0889f39e902b3637361264d727e033982fe41e2acb1957611c38f67bcdc5a21,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090517730551699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b332148400afafc83ba89d2d516152a0e1400d5890cd03c2cf7c004e738ef,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090517719438316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4445a6aadf7c24d19f462636ac91283e94e21059061f66d534a35131df3bd909,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090514707404310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ed377616030bc2029c6fb29a08691a58603fd14212232634bb14ffddedb3,PodSandboxId:ff39b90e5d9b1dd7f19c84398e0f3d185685f066b1f0698948fe06cdc25618ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495738488952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b984c0263c4f2eb936dc659778d80d8d71f4417667291381d2278428fed760c,PodSandboxId:2bbdda436b3ffe83621d5133aa3b3bfd7c02af2554240aaf16714d5fdbfec174,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090494664885524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf
-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36e1fe7e5a4957aa704dafce82dfb07ecb341d2daa35fac7291ba806a1ec22e,PodSandboxId:e940f24f8176fab4a916b5a07ee950e650cc09912af7e6f2559e3da43cbf2625,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495666892561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b389437911c6712d32111bb3ed6769b145cc5c8c90791c1edeb6b1c2c825db2,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090494587876944,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539a13def6e0208747e810aa688c7f5c4cc2d04eb71e5177a21b9143cdbb5443,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090494426541055,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d479f9e70a0c4c076da0e910b0efcdd04a5443104a6e0a696cfbe090f45c2c50,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090494400982146,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3872e7369ccec420a99b1a6e99e0e605c409a69c5069c19f1de96810cc92182e,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090494363409368,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daf25e5ac8dd8eff20a3072e72d83126c01cb15c814992b8025e9c2bd400d12,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724090494316881680,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444da43334ede1632a0e1fe511c7495fb0d20bc738b1816f089e0799b0689a19,PodSandboxId:128f2ba3964caf00850cf2b666e426fd2e25039d9ce811e6e28e5e6616c494ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476543448832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubern
etes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbc829741236ef3ad398074d29a76cb934a957b4fe956f9a5f53ec1d8af902c,PodSandboxId:0655a8a77192bb76b7872e7d9029eea346b4508a1882fa4ec857b048b92fbae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476493080825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6135f1328c820c68c5cd034b003845151f0ba3e879beef2170cb3b3350812829,PodSandboxId:63abda04ee088b31ebefa6df7dbd90ea8ad7d671bcb12b4b9a15a6f6bcd267b5,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090475982180553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc2a130d-dccc-409a-a1b7-51f1d6c2e67c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.701100909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4c105b3-88e6-4048-8639-1617ed8af753 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.701174831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4c105b3-88e6-4048-8639-1617ed8af753 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.702728947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74910f29-89ce-4ba4-83e9-6eccb7540f32 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.703087171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090524703067025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74910f29-89ce-4ba4-83e9-6eccb7540f32 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.703628953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bff8b56-5245-44e2-bbd9-baeed0dd6c39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.703682344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bff8b56-5245-44e2-bbd9-baeed0dd6c39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:02:04 kubernetes-upgrade-415209 crio[2291]: time="2024-08-19 18:02:04.704022318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aff04e7c6f755e7deb5788ea173af5634953b5be35d038047f4a2902a209042d,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724090521543592311,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a48813ea7c603cf1f2562ce43721dcba88d4e89571cb56da91c9287950ece8,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724090517741445685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0889f39e902b3637361264d727e033982fe41e2acb1957611c38f67bcdc5a21,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724090517730551699,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549b332148400afafc83ba89d2d516152a0e1400d5890cd03c2cf7c004e738ef,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724090517719438316,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4445a6aadf7c24d19f462636ac91283e94e21059061f66d534a35131df3bd909,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724090514707404310,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7781ed377616030bc2029c6fb29a08691a58603fd14212232634bb14ffddedb3,PodSandboxId:ff39b90e5d9b1dd7f19c84398e0f3d185685f066b1f0698948fe06cdc25618ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495738488952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b984c0263c4f2eb936dc659778d80d8d71f4417667291381d2278428fed760c,PodSandboxId:2bbdda436b3ffe83621d5133aa3b3bfd7c02af2554240aaf16714d5fdbfec174,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724090494664885524,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf
-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36e1fe7e5a4957aa704dafce82dfb07ecb341d2daa35fac7291ba806a1ec22e,PodSandboxId:e940f24f8176fab4a916b5a07ee950e650cc09912af7e6f2559e3da43cbf2625,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724090495666892561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b389437911c6712d32111bb3ed6769b145cc5c8c90791c1edeb6b1c2c825db2,PodSandboxId:509c547451aa8d9854730650c6d429606add9ec9ca199ebae1538eab775f10d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724090494587876944,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e010ccc3-6784-4fc4-8ac6-c3601bfe31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539a13def6e0208747e810aa688c7f5c4cc2d04eb71e5177a21b9143cdbb5443,PodSandboxId:d861c89dd398074cfe5609603502ec6444a9bda806b6c4884f85058752d55a01,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724090494426541055,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ef607ecdc5d5048dbd539e5b7dbe126,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d479f9e70a0c4c076da0e910b0efcdd04a5443104a6e0a696cfbe090f45c2c50,PodSandboxId:47be6974c964107f9c47f06c6a58d5cc60635b4c248e69b61dee287a7bfc1d04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724090494400982146,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc4baed5531069f72344c61a236e079,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3872e7369ccec420a99b1a6e99e0e605c409a69c5069c19f1de96810cc92182e,PodSandboxId:82d4d835b148ad72fffa9e791f42009ec593f2d39e0e9224e18b4aa890b93d71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724090494363409368,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6effab939260143f0cae596c12af7a37,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daf25e5ac8dd8eff20a3072e72d83126c01cb15c814992b8025e9c2bd400d12,PodSandboxId:1ad926e6b70f00a3409c4103c5cf35735b25f54e58075932451962a87d9dfb62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724090494316881680,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-415209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e2d76e892c5018a58a1515d4cc893eb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444da43334ede1632a0e1fe511c7495fb0d20bc738b1816f089e0799b0689a19,PodSandboxId:128f2ba3964caf00850cf2b666e426fd2e25039d9ce811e6e28e5e6616c494ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476543448832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubern
etes.pod.name: coredns-6f6b679f8f-6hvhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192399ae-50bf-4a56-a6e0-fd7fa3cc2d79,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbc829741236ef3ad398074d29a76cb934a957b4fe956f9a5f53ec1d8af902c,PodSandboxId:0655a8a77192bb76b7872e7d9029eea346b4508a1882fa4ec857b048b92fbae2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724090476493080825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m7tnl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214207fe-b389-47ea-9d6d-003b712e21d7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6135f1328c820c68c5cd034b003845151f0ba3e879beef2170cb3b3350812829,PodSandboxId:63abda04ee088b31ebefa6df7dbd90ea8ad7d671bcb12b4b9a15a6f6bcd267b5,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724090475982180553,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfttw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bff8b56-5245-44e2-bbd9-baeed0dd6c39 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aff04e7c6f755       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   509c547451aa8       storage-provisioner
	a5a48813ea7c6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago       Running             kube-scheduler            2                   82d4d835b148a       kube-scheduler-kubernetes-upgrade-415209
	c0889f39e902b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            2                   1ad926e6b70f0       kube-apiserver-kubernetes-upgrade-415209
	549b332148400       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   d861c89dd3980       etcd-kubernetes-upgrade-415209
	4445a6aadf7c2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   10 seconds ago      Running             kube-controller-manager   2                   47be6974c9641       kube-controller-manager-kubernetes-upgrade-415209
	7781ed3776160       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Running             coredns                   1                   ff39b90e5d9b1       coredns-6f6b679f8f-6hvhs
	d36e1fe7e5a49       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Running             coredns                   1                   e940f24f8176f       coredns-6f6b679f8f-m7tnl
	9b984c0263c4f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   30 seconds ago      Running             kube-proxy                1                   2bbdda436b3ff       kube-proxy-gfttw
	6b389437911c6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   30 seconds ago      Exited              storage-provisioner       1                   509c547451aa8       storage-provisioner
	539a13def6e02       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   30 seconds ago      Exited              etcd                      1                   d861c89dd3980       etcd-kubernetes-upgrade-415209
	d479f9e70a0c4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   30 seconds ago      Exited              kube-controller-manager   1                   47be6974c9641       kube-controller-manager-kubernetes-upgrade-415209
	3872e7369ccec       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   30 seconds ago      Exited              kube-scheduler            1                   82d4d835b148a       kube-scheduler-kubernetes-upgrade-415209
	9daf25e5ac8dd       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   30 seconds ago      Exited              kube-apiserver            1                   1ad926e6b70f0       kube-apiserver-kubernetes-upgrade-415209
	444da43334ede       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   48 seconds ago      Exited              coredns                   0                   128f2ba3964ca       coredns-6f6b679f8f-6hvhs
	1bbc829741236       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   48 seconds ago      Exited              coredns                   0                   0655a8a77192b       coredns-6f6b679f8f-m7tnl
	6135f1328c820       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   48 seconds ago      Exited              kube-proxy                0                   63abda04ee088       kube-proxy-gfttw
	
	
	==> coredns [1bbc829741236ef3ad398074d29a76cb934a957b4fe956f9a5f53ec1d8af902c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [444da43334ede1632a0e1fe511c7495fb0d20bc738b1816f089e0799b0689a19] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7781ed377616030bc2029c6fb29a08691a58603fd14212232634bb14ffddedb3] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1949690326]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:01:36.179) (total time: 10001ms):
	Trace[1949690326]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:01:46.180)
	Trace[1949690326]: [10.001351759s] [10.001351759s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[787105755]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:01:36.178) (total time: 10002ms):
	Trace[787105755]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:01:46.180)
	Trace[787105755]: [10.002194365s] [10.002194365s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[933943252]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:01:36.179) (total time: 10002ms):
	Trace[933943252]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:01:46.181)
	Trace[933943252]: [10.002517856s] [10.002517856s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53390->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53390->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53380->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53380->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53384->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:53384->10.96.0.1:443: read: connection reset by peer
	
	
	==> coredns [d36e1fe7e5a4957aa704dafce82dfb07ecb341d2daa35fac7291ba806a1ec22e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[858727343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:01:35.993) (total time: 10001ms):
	Trace[858727343]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:01:45.994)
	Trace[858727343]: [10.001726905s] [10.001726905s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[865519713]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:01:35.993) (total time: 10001ms):
	Trace[865519713]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:01:45.994)
	Trace[865519713]: [10.00154832s] [10.00154832s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1265841605]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:01:35.992) (total time: 10003ms):
	Trace[1265841605]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:01:45.994)
	Trace[1265841605]: [10.003001093s] [10.003001093s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41618->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41618->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41592->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41592->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41604->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41604->10.96.0.1:443: read: connection reset by peer
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-415209
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-415209
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:01:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-415209
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:02:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:02:01 +0000   Mon, 19 Aug 2024 18:01:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:02:01 +0000   Mon, 19 Aug 2024 18:01:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:02:01 +0000   Mon, 19 Aug 2024 18:01:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:02:01 +0000   Mon, 19 Aug 2024 18:01:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    kubernetes-upgrade-415209
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b8fd34082ca4e05b31c199f4e61484a
	  System UUID:                8b8fd340-82ca-4e05-b31c-199f4e61484a
	  Boot ID:                    4e74e96f-ff51-4ad4-a8dd-b00d1f2c4420
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6hvhs                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     50s
	  kube-system                 coredns-6f6b679f8f-m7tnl                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     50s
	  kube-system                 etcd-kubernetes-upgrade-415209                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         53s
	  kube-system                 kube-apiserver-kubernetes-upgrade-415209             250m (12%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-415209    200m (10%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-gfttw                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-kubernetes-upgrade-415209             100m (5%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x8 over 64s)  kubelet          Node kubernetes-upgrade-415209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 64s)  kubelet          Node kubernetes-upgrade-415209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 64s)  kubelet          Node kubernetes-upgrade-415209 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node kubernetes-upgrade-415209 event: Registered Node kubernetes-upgrade-415209 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-415209 event: Registered Node kubernetes-upgrade-415209 in Controller
	
	
	==> dmesg <==
	[  +1.525287] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.992482] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.061265] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060753] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.203745] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.159892] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.303143] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +4.093560] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[Aug19 18:01] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[  +0.062699] kauditd_printk_skb: 158 callbacks suppressed
	[ +11.771109] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	[  +0.088423] kauditd_printk_skb: 69 callbacks suppressed
	[ +18.480551] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[  +0.080156] kauditd_printk_skb: 109 callbacks suppressed
	[  +0.050276] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.183249] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +0.150746] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.311148] systemd-fstab-generator[2276]: Ignoring "noauto" option for root device
	[  +0.916652] systemd-fstab-generator[2429]: Ignoring "noauto" option for root device
	[ +12.664983] kauditd_printk_skb: 230 callbacks suppressed
	[ +10.820213] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	[Aug19 18:02] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.556119] systemd-fstab-generator[3796]: Ignoring "noauto" option for root device
	
	
	==> etcd [539a13def6e0208747e810aa688c7f5c4cc2d04eb71e5177a21b9143cdbb5443] <==
	{"level":"warn","ts":"2024-08-19T18:01:35.426966Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T18:01:35.427078Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.81:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.81:2380","--initial-cluster=kubernetes-upgrade-415209=https://192.168.39.81:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.81:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.81:2380","--name=kubernetes-upgrade-415209","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot
-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-08-19T18:01:35.427260Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-08-19T18:01:35.427285Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-08-19T18:01:35.427314Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.81:2380"]}
	{"level":"info","ts":"2024-08-19T18:01:35.427534Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T18:01:35.428929Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"]}
	{"level":"info","ts":"2024-08-19T18:01:35.429264Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-415209","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.81:2380"],"listen-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","i
nitial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-08-19T18:01:35.516536Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"77.515628ms"}
	{"level":"info","ts":"2024-08-19T18:01:35.579096Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-19T18:01:35.591969Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","commit-index":389}
	{"level":"info","ts":"2024-08-19T18:01:35.592153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-19T18:01:35.592233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became follower at term 2"}
	{"level":"info","ts":"2024-08-19T18:01:35.613933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 81f5d9acb096f107 [peers: [], term: 2, commit: 389, applied: 0, lastindex: 389, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-19T18:01:35.618935Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	
	
	==> etcd [549b332148400afafc83ba89d2d516152a0e1400d5890cd03c2cf7c004e738ef] <==
	{"level":"info","ts":"2024-08-19T18:01:57.995368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 switched to configuration voters=(9364630335907098887)"}
	{"level":"info","ts":"2024-08-19T18:01:57.995511Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","added-peer-id":"81f5d9acb096f107","added-peer-peer-urls":["https://192.168.39.81:2380"]}
	{"level":"info","ts":"2024-08-19T18:01:57.995560Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T18:01:57.995783Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:01:57.995846Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"81f5d9acb096f107","initial-advertise-peer-urls":["https://192.168.39.81:2380"],"listen-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T18:01:57.995880Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T18:01:57.995860Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:01:57.996043Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-08-19T18:01:57.996062Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-08-19T18:01:59.566691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-19T18:01:59.566827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-19T18:01:59.566884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgPreVoteResp from 81f5d9acb096f107 at term 2"}
	{"level":"info","ts":"2024-08-19T18:01:59.566930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became candidate at term 3"}
	{"level":"info","ts":"2024-08-19T18:01:59.566956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgVoteResp from 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-08-19T18:01:59.566992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became leader at term 3"}
	{"level":"info","ts":"2024-08-19T18:01:59.567019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81f5d9acb096f107 elected leader 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-08-19T18:01:59.573996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:01:59.575028Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:01:59.575831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.81:2379"}
	{"level":"info","ts":"2024-08-19T18:01:59.576143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:01:59.573952Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"81f5d9acb096f107","local-member-attributes":"{Name:kubernetes-upgrade-415209 ClientURLs:[https://192.168.39.81:2379]}","request-path":"/0/members/81f5d9acb096f107/attributes","cluster-id":"a77bf2d9a9fbb59e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T18:01:59.576847Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:01:59.577626Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:01:59.577678Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:01:59.580274Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:02:05 up 1 min,  0 users,  load average: 0.87, 0.32, 0.12
	Linux kubernetes-upgrade-415209 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9daf25e5ac8dd8eff20a3072e72d83126c01cb15c814992b8025e9c2bd400d12] <==
	I0819 18:01:34.940233       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0819 18:01:35.804531       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:35.804788       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0819 18:01:35.805410       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0819 18:01:35.817077       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:01:35.823991       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0819 18:01:35.825237       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0819 18:01:35.825500       1 instance.go:232] Using reconciler: lease
	W0819 18:01:35.826384       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:36.805157       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:36.805645       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:36.826967       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:38.107065       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:38.164847       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:38.300274       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:40.337892       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:41.152446       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:41.339009       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:44.181948       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:44.763723       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:44.845501       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:50.405143       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:51.330989       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:01:51.731832       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0819 18:01:55.826542       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
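	[illustrative aside, not part of the captured output] This apiserver instance repeatedly got "connection refused" dialing etcd on 127.0.0.1:2379 and finally aborted with "context deadline exceeded". A minimal sketch for checking whether anything is listening on the etcd client port inside the VM, assuming the profile above is still up and that ss ships in the guest image:
	  minikube -p kubernetes-upgrade-415209 ssh -- sudo ss -ltn
	  # look for LISTEN entries on 127.0.0.1:2379 and 192.168.39.81:2379 in the output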
	
	
	==> kube-apiserver [c0889f39e902b3637361264d727e033982fe41e2acb1957611c38f67bcdc5a21] <==
	I0819 18:02:01.030404       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:02:01.037453       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:02:01.037660       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:02:01.037681       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:02:01.038528       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 18:02:01.040300       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 18:02:01.040378       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:02:01.040454       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:02:01.041361       1 aggregator.go:171] initial CRD sync complete...
	I0819 18:02:01.043252       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 18:02:01.043335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 18:02:01.043846       1 cache.go:39] Caches are synced for autoregister controller
	E0819 18:02:01.056021       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0819 18:02:01.058917       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:02:01.074498       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:02:01.074535       1 policy_source.go:224] refreshing policies
	I0819 18:02:01.089493       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:02:01.637643       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:02:01.836775       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 18:02:02.523827       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0819 18:02:02.537509       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0819 18:02:02.593635       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0819 18:02:02.695116       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 18:02:02.704900       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 18:02:04.547579       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
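	[illustrative aside, not part of the captured output] Once this replacement apiserver is serving (the cache-sync and quota-evaluator lines above), its aggregated health checks can be queried through the same kubectl context the test harness uses later in this report. A minimal sketch:
	  kubectl --context kubernetes-upgrade-415209 get --raw '/readyz?verbose'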
	
	
	==> kube-controller-manager [4445a6aadf7c24d19f462636ac91283e94e21059061f66d534a35131df3bd909] <==
	I0819 18:02:04.193832       1 shared_informer.go:320] Caches are synced for GC
	I0819 18:02:04.197030       1 shared_informer.go:320] Caches are synced for expand
	I0819 18:02:04.200841       1 shared_informer.go:320] Caches are synced for attach detach
	I0819 18:02:04.204013       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0819 18:02:04.209323       1 shared_informer.go:320] Caches are synced for job
	I0819 18:02:04.230625       1 shared_informer.go:320] Caches are synced for daemon sets
	I0819 18:02:04.247093       1 shared_informer.go:320] Caches are synced for node
	I0819 18:02:04.248328       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0819 18:02:04.248389       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0819 18:02:04.248409       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0819 18:02:04.248419       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0819 18:02:04.248572       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-415209"
	I0819 18:02:04.250098       1 shared_informer.go:320] Caches are synced for taint
	I0819 18:02:04.250236       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0819 18:02:04.250312       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-415209"
	I0819 18:02:04.250359       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0819 18:02:04.310450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="176.172103ms"
	I0819 18:02:04.311003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="79.845µs"
	I0819 18:02:04.341728       1 shared_informer.go:320] Caches are synced for cronjob
	I0819 18:02:04.356361       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 18:02:04.392018       1 shared_informer.go:320] Caches are synced for HPA
	I0819 18:02:04.400626       1 shared_informer.go:320] Caches are synced for resource quota
	I0819 18:02:04.845417       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 18:02:04.864631       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 18:02:04.864692       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [d479f9e70a0c4c076da0e910b0efcdd04a5443104a6e0a696cfbe090f45c2c50] <==
	
	
	==> kube-proxy [6135f1328c820c68c5cd034b003845151f0ba3e879beef2170cb3b3350812829] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:01:16.312514       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:01:16.325525       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	E0819 18:01:16.325620       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:01:16.385638       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:01:16.385678       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:01:16.385709       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:01:16.398091       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:01:16.399414       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:01:16.399441       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:01:16.401328       1 config.go:197] "Starting service config controller"
	I0819 18:01:16.401352       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:01:16.401432       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:01:16.401437       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:01:16.401835       1 config.go:326] "Starting node config controller"
	I0819 18:01:16.401842       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:01:16.501563       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:01:16.501615       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:01:16.501932       1 shared_informer.go:320] Caches are synced for node config
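	[illustrative aside, not part of the captured output] The "Operation not supported" nftables cleanup errors above are followed by kube-proxy selecting the iptables proxier ("Using iptables Proxier"). A minimal sketch for cross-checking from inside the VM, assuming nft and iptables-save are present in the guest image:
	  minikube -p kubernetes-upgrade-415209 ssh -- sudo nft list tables    # may fail similarly if the guest kernel lacks nf_tables support
	  minikube -p kubernetes-upgrade-415209 ssh -- sudo iptables-save      # KUBE-* chains indicate the iptables proxier is programming rules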
	
	
	==> kube-proxy [9b984c0263c4f2eb936dc659778d80d8d71f4417667291381d2278428fed760c] <==
	 >
	E0819 18:01:36.239453       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:01:46.242846       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-415209\": net/http: TLS handshake timeout"
	E0819 18:01:56.834877       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-415209\": dial tcp 192.168.39.81:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.81:37040->192.168.39.81:8443: read: connection reset by peer"
	I0819 18:02:01.014910       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	E0819 18:02:01.015028       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:02:01.071848       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:02:01.071914       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:02:01.071942       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:02:01.074390       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:02:01.075023       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:02:01.075099       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:02:01.076684       1 config.go:197] "Starting service config controller"
	I0819 18:02:01.076760       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:02:01.076799       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:02:01.076814       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:02:01.077355       1 config.go:326] "Starting node config controller"
	I0819 18:02:01.077394       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:02:01.177262       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:02:01.177277       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:02:01.177514       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3872e7369ccec420a99b1a6e99e0e605c409a69c5069c19f1de96810cc92182e] <==
	I0819 18:01:36.523271       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [a5a48813ea7c603cf1f2562ce43721dcba88d4e89571cb56da91c9287950ece8] <==
	I0819 18:01:58.577274       1 serving.go:386] Generated self-signed cert in-memory
	W0819 18:02:00.943336       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 18:02:00.943383       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 18:02:00.943395       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 18:02:00.943406       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 18:02:00.996373       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 18:02:00.996416       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:02:01.003841       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 18:02:01.004054       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 18:02:01.004098       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 18:02:01.004124       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 18:02:01.104768       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493023    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fc4baed5531069f72344c61a236e079-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-415209\" (UID: \"9fc4baed5531069f72344c61a236e079\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493043    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6effab939260143f0cae596c12af7a37-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-415209\" (UID: \"6effab939260143f0cae596c12af7a37\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493112    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9ef607ecdc5d5048dbd539e5b7dbe126-etcd-certs\") pod \"etcd-kubernetes-upgrade-415209\" (UID: \"9ef607ecdc5d5048dbd539e5b7dbe126\") " pod="kube-system/etcd-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493125    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9ef607ecdc5d5048dbd539e5b7dbe126-etcd-data\") pod \"etcd-kubernetes-upgrade-415209\" (UID: \"9ef607ecdc5d5048dbd539e5b7dbe126\") " pod="kube-system/etcd-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493255    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e2d76e892c5018a58a1515d4cc893eb-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-415209\" (UID: \"5e2d76e892c5018a58a1515d4cc893eb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493362    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e2d76e892c5018a58a1515d4cc893eb-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-415209\" (UID: \"5e2d76e892c5018a58a1515d4cc893eb\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493441    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fc4baed5531069f72344c61a236e079-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-415209\" (UID: \"9fc4baed5531069f72344c61a236e079\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.493520    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fc4baed5531069f72344c61a236e079-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-415209\" (UID: \"9fc4baed5531069f72344c61a236e079\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.635396    3453 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: E0819 18:01:57.636275    3453 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.81:8443: connect: connection refused" node="kubernetes-upgrade-415209"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.708447    3453 scope.go:117] "RemoveContainer" containerID="539a13def6e0208747e810aa688c7f5c4cc2d04eb71e5177a21b9143cdbb5443"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.710007    3453 scope.go:117] "RemoveContainer" containerID="9daf25e5ac8dd8eff20a3072e72d83126c01cb15c814992b8025e9c2bd400d12"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:57.714996    3453 scope.go:117] "RemoveContainer" containerID="3872e7369ccec420a99b1a6e99e0e605c409a69c5069c19f1de96810cc92182e"
	Aug 19 18:01:57 kubernetes-upgrade-415209 kubelet[3453]: E0819 18:01:57.846942    3453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-415209?timeout=10s\": dial tcp 192.168.39.81:8443: connect: connection refused" interval="800ms"
	Aug 19 18:01:58 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:01:58.037970    3453 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-415209"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.120766    3453 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-415209"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.121247    3453 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-415209"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.121328    3453 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.122525    3453 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.223063    3453 apiserver.go:52] "Watching apiserver"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.240374    3453 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.290288    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3-xtables-lock\") pod \"kube-proxy-gfttw\" (UID: \"c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3\") " pod="kube-system/kube-proxy-gfttw"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.290855    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e010ccc3-6784-4fc4-8ac6-c3601bfe31a3-tmp\") pod \"storage-provisioner\" (UID: \"e010ccc3-6784-4fc4-8ac6-c3601bfe31a3\") " pod="kube-system/storage-provisioner"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.291120    3453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3-lib-modules\") pod \"kube-proxy-gfttw\" (UID: \"c8bc7583-8aaf-4adf-aa57-ff2c2ab588d3\") " pod="kube-system/kube-proxy-gfttw"
	Aug 19 18:02:01 kubernetes-upgrade-415209 kubelet[3453]: I0819 18:02:01.530656    3453 scope.go:117] "RemoveContainer" containerID="6b389437911c6712d32111bb3ed6769b145cc5c8c90791c1edeb6b1c2c825db2"
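	[illustrative aside, not part of the captured output] The container the kubelet removes in the last line (6b3894379…) is the failed storage-provisioner instance whose log follows. The kubelet's journal for this window can also be read directly from the VM; a sketch, assuming the VM is still up:
	  minikube -p kubernetes-upgrade-415209 ssh -- sudo journalctl -u kubelet --no-pager -n 100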
	
	
	==> storage-provisioner [6b389437911c6712d32111bb3ed6769b145cc5c8c90791c1edeb6b1c2c825db2] <==
	I0819 18:01:36.027074       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 18:01:46.040645       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: TLS handshake timeout
	
	
	==> storage-provisioner [aff04e7c6f755e7deb5788ea173af5634953b5be35d038047f4a2902a209042d] <==
	I0819 18:02:01.623036       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:02:01.630715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:02:01.630879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:02:01.643523       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:02:01.643730       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-415209_b8e8d6e6-4bf9-42b2-8be1-54f0e8fcb92d!
	I0819 18:02:01.644682       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"26746fd0-7b5e-4ad8-a872-1b0e6c8c45ad", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-415209_b8e8d6e6-4bf9-42b2-8be1-54f0e8fcb92d became leader
	I0819 18:02:01.744813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-415209_b8e8d6e6-4bf9-42b2-8be1-54f0e8fcb92d!
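	[illustrative aside, not part of the captured output] The provisioner records its leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above; a minimal sketch for inspecting it (the holder identity normally sits in the control-plane.alpha.kubernetes.io/leader annotation, an assumption about the default client-go endpoints lock rather than something shown in this log):
	  kubectl --context kubernetes-upgrade-415209 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml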
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-415209 -n kubernetes-upgrade-415209
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-415209 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-415209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-415209
--- FAIL: TestKubernetesUpgrade (379.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (272.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-079123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-079123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.443394239s)

                                                
                                                
-- stdout --
	* [old-k8s-version-079123] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-079123" primary control-plane node in "old-k8s-version-079123" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:02:02.810247   59547 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:02:02.810366   59547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:02:02.810376   59547 out.go:358] Setting ErrFile to fd 2...
	I0819 18:02:02.810382   59547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:02:02.810697   59547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:02:02.811347   59547 out.go:352] Setting JSON to false
	I0819 18:02:02.812502   59547 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6268,"bootTime":1724084255,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:02:02.812585   59547 start.go:139] virtualization: kvm guest
	I0819 18:02:02.814789   59547 out.go:177] * [old-k8s-version-079123] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:02:02.816120   59547 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:02:02.816181   59547 notify.go:220] Checking for updates...
	I0819 18:02:02.818843   59547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:02:02.820246   59547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:02:02.821654   59547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:02:02.822956   59547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:02:02.824159   59547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:02:02.825932   59547 config.go:182] Loaded profile config "cert-expiration-975771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:02.826072   59547 config.go:182] Loaded profile config "cert-options-948260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:02.826187   59547 config.go:182] Loaded profile config "kubernetes-upgrade-415209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:02:02.826295   59547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:02:02.865548   59547 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:02:02.866879   59547 start.go:297] selected driver: kvm2
	I0819 18:02:02.866899   59547 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:02:02.866911   59547 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:02:02.867596   59547 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:02:02.867671   59547 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:02:02.884258   59547 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:02:02.884309   59547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:02:02.884518   59547 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:02:02.884553   59547 cni.go:84] Creating CNI manager for ""
	I0819 18:02:02.884560   59547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:02:02.884567   59547 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:02:02.884615   59547 start.go:340] cluster config:
	{Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:02:02.884722   59547 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:02:02.886644   59547 out.go:177] * Starting "old-k8s-version-079123" primary control-plane node in "old-k8s-version-079123" cluster
	I0819 18:02:02.887754   59547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:02:02.887783   59547 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:02:02.887792   59547 cache.go:56] Caching tarball of preloaded images
	I0819 18:02:02.887872   59547 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:02:02.887881   59547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 18:02:02.887954   59547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/config.json ...
	I0819 18:02:02.887971   59547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/config.json: {Name:mkb4d02fe4f6ea7cf927864a20e87bdfa352ee49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:02.888088   59547 start.go:360] acquireMachinesLock for old-k8s-version-079123: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:02:08.073492   59547 start.go:364] duration metric: took 5.185373242s to acquireMachinesLock for "old-k8s-version-079123"
	I0819 18:02:08.073541   59547 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:02:08.073696   59547 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:02:08.075759   59547 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 18:02:08.075948   59547 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:02:08.076001   59547 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:02:08.097175   59547 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0819 18:02:08.097641   59547 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:02:08.098354   59547 main.go:141] libmachine: Using API Version  1
	I0819 18:02:08.098378   59547 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:02:08.098832   59547 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:02:08.099039   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetMachineName
	I0819 18:02:08.099206   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:08.099391   59547 start.go:159] libmachine.API.Create for "old-k8s-version-079123" (driver="kvm2")
	I0819 18:02:08.099426   59547 client.go:168] LocalClient.Create starting
	I0819 18:02:08.099471   59547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 18:02:08.099515   59547 main.go:141] libmachine: Decoding PEM data...
	I0819 18:02:08.099538   59547 main.go:141] libmachine: Parsing certificate...
	I0819 18:02:08.099617   59547 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 18:02:08.099646   59547 main.go:141] libmachine: Decoding PEM data...
	I0819 18:02:08.099676   59547 main.go:141] libmachine: Parsing certificate...
	I0819 18:02:08.099707   59547 main.go:141] libmachine: Running pre-create checks...
	I0819 18:02:08.099727   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .PreCreateCheck
	I0819 18:02:08.100088   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetConfigRaw
	I0819 18:02:08.100537   59547 main.go:141] libmachine: Creating machine...
	I0819 18:02:08.100557   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .Create
	I0819 18:02:08.100693   59547 main.go:141] libmachine: (old-k8s-version-079123) Creating KVM machine...
	I0819 18:02:08.102177   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found existing default KVM network
	I0819 18:02:08.103678   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:08.103503   59854 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002155a0}
	I0819 18:02:08.103709   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | created network xml: 
	I0819 18:02:08.103722   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | <network>
	I0819 18:02:08.103751   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |   <name>mk-old-k8s-version-079123</name>
	I0819 18:02:08.103772   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |   <dns enable='no'/>
	I0819 18:02:08.103783   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |   
	I0819 18:02:08.103793   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 18:02:08.103804   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |     <dhcp>
	I0819 18:02:08.103817   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 18:02:08.103828   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |     </dhcp>
	I0819 18:02:08.103853   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |   </ip>
	I0819 18:02:08.103863   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG |   
	I0819 18:02:08.103872   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | </network>
	I0819 18:02:08.103885   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | 
	I0819 18:02:08.109811   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | trying to create private KVM network mk-old-k8s-version-079123 192.168.39.0/24...
	I0819 18:02:08.180917   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | private KVM network mk-old-k8s-version-079123 192.168.39.0/24 created
	I0819 18:02:08.180954   59547 main.go:141] libmachine: (old-k8s-version-079123) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123 ...
	I0819 18:02:08.180968   59547 main.go:141] libmachine: (old-k8s-version-079123) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:02:08.181029   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:08.180925   59854 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:02:08.181095   59547 main.go:141] libmachine: (old-k8s-version-079123) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:02:08.435690   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:08.435567   59854 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa...
	I0819 18:02:08.849836   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:08.849736   59854 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/old-k8s-version-079123.rawdisk...
	I0819 18:02:08.849876   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Writing magic tar header
	I0819 18:02:08.849889   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Writing SSH key tar header
	I0819 18:02:08.849898   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:08.849852   59854 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123 ...
	I0819 18:02:08.849988   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123
	I0819 18:02:08.850011   59547 main.go:141] libmachine: (old-k8s-version-079123) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123 (perms=drwx------)
	I0819 18:02:08.850032   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 18:02:08.850048   59547 main.go:141] libmachine: (old-k8s-version-079123) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:02:08.850068   59547 main.go:141] libmachine: (old-k8s-version-079123) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 18:02:08.850082   59547 main.go:141] libmachine: (old-k8s-version-079123) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 18:02:08.850092   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:02:08.850101   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 18:02:08.850110   59547 main.go:141] libmachine: (old-k8s-version-079123) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:02:08.850118   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:02:08.850130   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:02:08.850147   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Checking permissions on dir: /home
	I0819 18:02:08.850161   59547 main.go:141] libmachine: (old-k8s-version-079123) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:02:08.850180   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Skipping /home - not owner
	I0819 18:02:08.850195   59547 main.go:141] libmachine: (old-k8s-version-079123) Creating domain...
	I0819 18:02:08.851378   59547 main.go:141] libmachine: (old-k8s-version-079123) define libvirt domain using xml: 
	I0819 18:02:08.851410   59547 main.go:141] libmachine: (old-k8s-version-079123) <domain type='kvm'>
	I0819 18:02:08.851427   59547 main.go:141] libmachine: (old-k8s-version-079123)   <name>old-k8s-version-079123</name>
	I0819 18:02:08.851438   59547 main.go:141] libmachine: (old-k8s-version-079123)   <memory unit='MiB'>2200</memory>
	I0819 18:02:08.851448   59547 main.go:141] libmachine: (old-k8s-version-079123)   <vcpu>2</vcpu>
	I0819 18:02:08.851459   59547 main.go:141] libmachine: (old-k8s-version-079123)   <features>
	I0819 18:02:08.851471   59547 main.go:141] libmachine: (old-k8s-version-079123)     <acpi/>
	I0819 18:02:08.851479   59547 main.go:141] libmachine: (old-k8s-version-079123)     <apic/>
	I0819 18:02:08.851489   59547 main.go:141] libmachine: (old-k8s-version-079123)     <pae/>
	I0819 18:02:08.851508   59547 main.go:141] libmachine: (old-k8s-version-079123)     
	I0819 18:02:08.851521   59547 main.go:141] libmachine: (old-k8s-version-079123)   </features>
	I0819 18:02:08.851545   59547 main.go:141] libmachine: (old-k8s-version-079123)   <cpu mode='host-passthrough'>
	I0819 18:02:08.851552   59547 main.go:141] libmachine: (old-k8s-version-079123)   
	I0819 18:02:08.851564   59547 main.go:141] libmachine: (old-k8s-version-079123)   </cpu>
	I0819 18:02:08.851571   59547 main.go:141] libmachine: (old-k8s-version-079123)   <os>
	I0819 18:02:08.851579   59547 main.go:141] libmachine: (old-k8s-version-079123)     <type>hvm</type>
	I0819 18:02:08.851595   59547 main.go:141] libmachine: (old-k8s-version-079123)     <boot dev='cdrom'/>
	I0819 18:02:08.851607   59547 main.go:141] libmachine: (old-k8s-version-079123)     <boot dev='hd'/>
	I0819 18:02:08.851618   59547 main.go:141] libmachine: (old-k8s-version-079123)     <bootmenu enable='no'/>
	I0819 18:02:08.851628   59547 main.go:141] libmachine: (old-k8s-version-079123)   </os>
	I0819 18:02:08.851635   59547 main.go:141] libmachine: (old-k8s-version-079123)   <devices>
	I0819 18:02:08.851647   59547 main.go:141] libmachine: (old-k8s-version-079123)     <disk type='file' device='cdrom'>
	I0819 18:02:08.851659   59547 main.go:141] libmachine: (old-k8s-version-079123)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/boot2docker.iso'/>
	I0819 18:02:08.851690   59547 main.go:141] libmachine: (old-k8s-version-079123)       <target dev='hdc' bus='scsi'/>
	I0819 18:02:08.851703   59547 main.go:141] libmachine: (old-k8s-version-079123)       <readonly/>
	I0819 18:02:08.851713   59547 main.go:141] libmachine: (old-k8s-version-079123)     </disk>
	I0819 18:02:08.851724   59547 main.go:141] libmachine: (old-k8s-version-079123)     <disk type='file' device='disk'>
	I0819 18:02:08.851736   59547 main.go:141] libmachine: (old-k8s-version-079123)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:02:08.851754   59547 main.go:141] libmachine: (old-k8s-version-079123)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/old-k8s-version-079123.rawdisk'/>
	I0819 18:02:08.851768   59547 main.go:141] libmachine: (old-k8s-version-079123)       <target dev='hda' bus='virtio'/>
	I0819 18:02:08.851780   59547 main.go:141] libmachine: (old-k8s-version-079123)     </disk>
	I0819 18:02:08.851801   59547 main.go:141] libmachine: (old-k8s-version-079123)     <interface type='network'>
	I0819 18:02:08.851815   59547 main.go:141] libmachine: (old-k8s-version-079123)       <source network='mk-old-k8s-version-079123'/>
	I0819 18:02:08.851825   59547 main.go:141] libmachine: (old-k8s-version-079123)       <model type='virtio'/>
	I0819 18:02:08.851848   59547 main.go:141] libmachine: (old-k8s-version-079123)     </interface>
	I0819 18:02:08.851868   59547 main.go:141] libmachine: (old-k8s-version-079123)     <interface type='network'>
	I0819 18:02:08.851888   59547 main.go:141] libmachine: (old-k8s-version-079123)       <source network='default'/>
	I0819 18:02:08.851896   59547 main.go:141] libmachine: (old-k8s-version-079123)       <model type='virtio'/>
	I0819 18:02:08.851902   59547 main.go:141] libmachine: (old-k8s-version-079123)     </interface>
	I0819 18:02:08.851909   59547 main.go:141] libmachine: (old-k8s-version-079123)     <serial type='pty'>
	I0819 18:02:08.851915   59547 main.go:141] libmachine: (old-k8s-version-079123)       <target port='0'/>
	I0819 18:02:08.851922   59547 main.go:141] libmachine: (old-k8s-version-079123)     </serial>
	I0819 18:02:08.851927   59547 main.go:141] libmachine: (old-k8s-version-079123)     <console type='pty'>
	I0819 18:02:08.851937   59547 main.go:141] libmachine: (old-k8s-version-079123)       <target type='serial' port='0'/>
	I0819 18:02:08.851946   59547 main.go:141] libmachine: (old-k8s-version-079123)     </console>
	I0819 18:02:08.851956   59547 main.go:141] libmachine: (old-k8s-version-079123)     <rng model='virtio'>
	I0819 18:02:08.851985   59547 main.go:141] libmachine: (old-k8s-version-079123)       <backend model='random'>/dev/random</backend>
	I0819 18:02:08.852009   59547 main.go:141] libmachine: (old-k8s-version-079123)     </rng>
	I0819 18:02:08.852020   59547 main.go:141] libmachine: (old-k8s-version-079123)     
	I0819 18:02:08.852030   59547 main.go:141] libmachine: (old-k8s-version-079123)     
	I0819 18:02:08.852039   59547 main.go:141] libmachine: (old-k8s-version-079123)   </devices>
	I0819 18:02:08.852050   59547 main.go:141] libmachine: (old-k8s-version-079123) </domain>
	I0819 18:02:08.852062   59547 main.go:141] libmachine: (old-k8s-version-079123) 
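	[editor's note] The XML dumped above is the libvirt domain definition the kvm2 driver hands to libvirtd before the "Creating domain..." and "Waiting to get IP..." lines that follow. As a rough, hedged sketch of that define-then-boot step (not the driver's actual code), the Go fragment below uses the libvirt Go bindings; the qemu:///system URI, the domainXML argument, and the error handling are illustrative assumptions.

	// Sketch only: define a domain from XML like the dump above, then boot it.
	// Assumes libvirt.org/go/libvirt is available; domainXML is a placeholder.
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	func createDomain(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system") // URI is an assumption
		if err != nil {
			return err
		}
		defer conn.Close()

		// Register the persistent domain definition with libvirtd.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()

		// Boot the defined domain; the driver then polls DHCP leases for its IP.
		return dom.Create()
	}

	func main() {
		if err := createDomain("<domain>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}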
	I0819 18:02:08.856270   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:ed:4f:b9 in network default
	I0819 18:02:08.856879   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:08.856901   59547 main.go:141] libmachine: (old-k8s-version-079123) Ensuring networks are active...
	I0819 18:02:08.857715   59547 main.go:141] libmachine: (old-k8s-version-079123) Ensuring network default is active
	I0819 18:02:08.858119   59547 main.go:141] libmachine: (old-k8s-version-079123) Ensuring network mk-old-k8s-version-079123 is active
	I0819 18:02:08.858713   59547 main.go:141] libmachine: (old-k8s-version-079123) Getting domain xml...
	I0819 18:02:08.859474   59547 main.go:141] libmachine: (old-k8s-version-079123) Creating domain...
	I0819 18:02:10.539557   59547 main.go:141] libmachine: (old-k8s-version-079123) Waiting to get IP...
	I0819 18:02:10.541001   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:10.541539   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:10.541570   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:10.541462   59854 retry.go:31] will retry after 276.165203ms: waiting for machine to come up
	I0819 18:02:10.819998   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:10.820766   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:10.820790   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:10.820704   59854 retry.go:31] will retry after 363.197265ms: waiting for machine to come up
	I0819 18:02:11.185194   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:11.185696   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:11.185772   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:11.185654   59854 retry.go:31] will retry after 353.313816ms: waiting for machine to come up
	I0819 18:02:11.540297   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:11.540856   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:11.540881   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:11.540816   59854 retry.go:31] will retry after 595.426104ms: waiting for machine to come up
	I0819 18:02:12.137531   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:12.138016   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:12.138039   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:12.137966   59854 retry.go:31] will retry after 493.024165ms: waiting for machine to come up
	I0819 18:02:12.632774   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:12.633234   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:12.633272   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:12.633149   59854 retry.go:31] will retry after 842.94456ms: waiting for machine to come up
	I0819 18:02:13.477879   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:13.478403   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:13.478440   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:13.478348   59854 retry.go:31] will retry after 784.068549ms: waiting for machine to come up
	I0819 18:02:14.264337   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:14.264881   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:14.264910   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:14.264850   59854 retry.go:31] will retry after 1.46689666s: waiting for machine to come up
	I0819 18:02:15.733617   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:15.734086   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:15.734108   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:15.734039   59854 retry.go:31] will retry after 1.576604882s: waiting for machine to come up
	I0819 18:02:17.312013   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:17.312477   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:17.312507   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:17.312425   59854 retry.go:31] will retry after 2.191806304s: waiting for machine to come up
	I0819 18:02:19.505983   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:19.506446   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:19.506477   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:19.506404   59854 retry.go:31] will retry after 1.810754595s: waiting for machine to come up
	I0819 18:02:21.319440   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:21.319978   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:21.320004   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:21.319931   59854 retry.go:31] will retry after 3.626766194s: waiting for machine to come up
	I0819 18:02:24.948985   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:24.949442   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:02:24.949470   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:02:24.949375   59854 retry.go:31] will retry after 4.4504971s: waiting for machine to come up
	I0819 18:02:29.401323   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.402243   59547 main.go:141] libmachine: (old-k8s-version-079123) Found IP for machine: 192.168.39.246
	I0819 18:02:29.402271   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has current primary IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
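	[editor's note] The repeated "will retry after ..." lines above are the driver polling the network's DHCP leases for the new MAC until an address appears, pausing a little longer after each miss. A minimal, self-contained sketch of that wait-with-growing-backoff pattern follows; the lookup callback, the starting interval, the growth factor, and the 5s cap are assumptions for illustration, not the logged implementation.

	package main

	import (
		"fmt"
		"time"
	)

	// waitForIP keeps calling lookup until it reports an IP or the deadline passes,
	// sleeping a growing interval between attempts (roughly like the logged retries).
	func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			time.Sleep(backoff)
			if backoff < 5*time.Second {
				backoff = backoff * 3 / 2
			}
		}
		return "", fmt.Errorf("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, bool) {
			attempts++
			if attempts < 4 {
				return "", false // simulate the lease not existing yet
			}
			return "192.168.39.246", true
		}, 30*time.Second)
		fmt.Println(ip, err)
	}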
	I0819 18:02:29.402279   59547 main.go:141] libmachine: (old-k8s-version-079123) Reserving static IP address...
	I0819 18:02:29.402771   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-079123", mac: "52:54:00:97:ce:99", ip: "192.168.39.246"} in network mk-old-k8s-version-079123
	I0819 18:02:29.477541   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Getting to WaitForSSH function...
	I0819 18:02:29.477582   59547 main.go:141] libmachine: (old-k8s-version-079123) Reserved static IP address: 192.168.39.246
	I0819 18:02:29.477600   59547 main.go:141] libmachine: (old-k8s-version-079123) Waiting for SSH to be available...
	I0819 18:02:29.480250   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.480683   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:29.480714   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.480903   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Using SSH client type: external
	I0819 18:02:29.480939   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa (-rw-------)
	I0819 18:02:29.480976   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:02:29.480996   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | About to run SSH command:
	I0819 18:02:29.481012   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | exit 0
	I0819 18:02:29.604941   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | SSH cmd err, output: <nil>: 
	I0819 18:02:29.605166   59547 main.go:141] libmachine: (old-k8s-version-079123) KVM machine creation complete!
	I0819 18:02:29.605579   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetConfigRaw
	I0819 18:02:29.606124   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:29.606323   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:29.606474   59547 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:02:29.606492   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetState
	I0819 18:02:29.607680   59547 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:02:29.607695   59547 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:02:29.607702   59547 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:02:29.607719   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:29.609920   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.610291   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:29.610315   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.610491   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:29.610656   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.610823   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.610948   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:29.611081   59547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.611272   59547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:02:29.611285   59547 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:02:29.711900   59547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:02:29.711937   59547 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:02:29.711954   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:29.715178   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.715606   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:29.715636   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.715989   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:29.716226   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.716370   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.716528   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:29.716687   59547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.716875   59547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:02:29.716885   59547 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:02:29.821193   59547 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:02:29.821282   59547 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:02:29.821298   59547 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:02:29.821309   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetMachineName
	I0819 18:02:29.821571   59547 buildroot.go:166] provisioning hostname "old-k8s-version-079123"
	I0819 18:02:29.821596   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetMachineName
	I0819 18:02:29.821770   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:29.824425   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.824816   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:29.824848   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.824962   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:29.825172   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.825318   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.825457   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:29.825577   59547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.825747   59547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:02:29.825759   59547 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079123 && echo "old-k8s-version-079123" | sudo tee /etc/hostname
	I0819 18:02:29.942251   59547 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079123
	
	I0819 18:02:29.942291   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:29.945086   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.945473   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:29.945498   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:29.945739   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:29.945966   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.946135   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:29.946307   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:29.946458   59547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:29.946647   59547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:02:29.946669   59547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079123/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:02:30.057264   59547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:02:30.057292   59547 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:02:30.057309   59547 buildroot.go:174] setting up certificates
	I0819 18:02:30.057318   59547 provision.go:84] configureAuth start
	I0819 18:02:30.057327   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetMachineName
	I0819 18:02:30.057620   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:02:30.060228   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.060707   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.060734   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.060886   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:30.063708   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.064067   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.064113   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.064264   59547 provision.go:143] copyHostCerts
	I0819 18:02:30.064313   59547 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:02:30.064331   59547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:02:30.064397   59547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:02:30.064525   59547 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:02:30.064537   59547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:02:30.064569   59547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:02:30.064671   59547 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:02:30.064680   59547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:02:30.064700   59547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:02:30.064824   59547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079123 san=[127.0.0.1 192.168.39.246 localhost minikube old-k8s-version-079123]
	I0819 18:02:30.157307   59547 provision.go:177] copyRemoteCerts
	I0819 18:02:30.157365   59547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:02:30.157388   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:30.159917   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.160277   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.160303   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.160478   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:30.160667   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.160854   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:30.160986   59547 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:02:30.246549   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:02:30.268937   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 18:02:30.290603   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:02:30.311956   59547 provision.go:87] duration metric: took 254.614314ms to configureAuth
	I0819 18:02:30.311978   59547 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:02:30.312133   59547 config.go:182] Loaded profile config "old-k8s-version-079123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 18:02:30.312225   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:30.315022   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.315428   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.315458   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.315655   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:30.315833   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.315998   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.316123   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:30.316288   59547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:30.316499   59547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:02:30.316515   59547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:02:30.565494   59547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:02:30.565545   59547 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:02:30.565558   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetURL
	I0819 18:02:30.566992   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | Using libvirt version 6000000
	I0819 18:02:30.569512   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.569792   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.569819   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.570050   59547 main.go:141] libmachine: Docker is up and running!
	I0819 18:02:30.570068   59547 main.go:141] libmachine: Reticulating splines...
	I0819 18:02:30.570075   59547 client.go:171] duration metric: took 22.470639191s to LocalClient.Create
	I0819 18:02:30.570100   59547 start.go:167] duration metric: took 22.470711059s to libmachine.API.Create "old-k8s-version-079123"
	I0819 18:02:30.570111   59547 start.go:293] postStartSetup for "old-k8s-version-079123" (driver="kvm2")
	I0819 18:02:30.570124   59547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:02:30.570148   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:30.570430   59547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:02:30.570459   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:30.572925   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.573299   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.573348   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.573540   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:30.573715   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.573931   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:30.574085   59547 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:02:30.661717   59547 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:02:30.665781   59547 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:02:30.665804   59547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:02:30.665861   59547 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:02:30.665969   59547 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:02:30.666092   59547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:02:30.676297   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:02:30.698555   59547 start.go:296] duration metric: took 128.433431ms for postStartSetup
	I0819 18:02:30.698600   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetConfigRaw
	I0819 18:02:30.699272   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:02:30.701641   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.702024   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.702049   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.702316   59547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/config.json ...
	I0819 18:02:30.702507   59547 start.go:128] duration metric: took 22.62880065s to createHost
	I0819 18:02:30.702529   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:30.704900   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.705238   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.705260   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.705379   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:30.705562   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.705707   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.705826   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:30.706009   59547 main.go:141] libmachine: Using SSH client type: native
	I0819 18:02:30.706211   59547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:02:30.706226   59547 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:02:30.808985   59547 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724090550.780171978
	
	I0819 18:02:30.809005   59547 fix.go:216] guest clock: 1724090550.780171978
	I0819 18:02:30.809012   59547 fix.go:229] Guest: 2024-08-19 18:02:30.780171978 +0000 UTC Remote: 2024-08-19 18:02:30.702519108 +0000 UTC m=+27.945165193 (delta=77.65287ms)
	I0819 18:02:30.809030   59547 fix.go:200] guest clock delta is within tolerance: 77.65287ms
	I0819 18:02:30.809036   59547 start.go:83] releasing machines lock for "old-k8s-version-079123", held for 22.735520293s
	I0819 18:02:30.809058   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:30.809365   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:02:30.812241   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.812655   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.812680   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.812911   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:30.813476   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:30.813658   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:02:30.813764   59547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:02:30.813805   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:30.813861   59547 ssh_runner.go:195] Run: cat /version.json
	I0819 18:02:30.813891   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:02:30.816271   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.816644   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.816671   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.816763   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.816805   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:30.816961   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.817148   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:30.817184   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:30.817217   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:30.817317   59547 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:02:30.817620   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:02:30.817793   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:02:30.817985   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:02:30.818146   59547 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:02:30.936699   59547 ssh_runner.go:195] Run: systemctl --version
	I0819 18:02:30.942760   59547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:02:31.102722   59547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:02:31.109559   59547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:02:31.109643   59547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:02:31.131890   59547 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:02:31.131915   59547 start.go:495] detecting cgroup driver to use...
	I0819 18:02:31.131977   59547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:02:31.148085   59547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:02:31.161549   59547 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:02:31.161622   59547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:02:31.178079   59547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:02:31.194129   59547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:02:31.323173   59547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:02:31.476353   59547 docker.go:233] disabling docker service ...
	I0819 18:02:31.476443   59547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:02:31.490108   59547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:02:31.502722   59547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:02:31.644521   59547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:02:31.760484   59547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:02:31.775243   59547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:02:31.792417   59547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 18:02:31.792479   59547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.802435   59547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:02:31.802498   59547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.812262   59547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.822251   59547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:02:31.832352   59547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:02:31.842930   59547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:02:31.852197   59547 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:02:31.852271   59547 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:02:31.864200   59547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:02:31.874124   59547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:31.989169   59547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:02:32.140821   59547 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:02:32.140886   59547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:02:32.145247   59547 start.go:563] Will wait 60s for crictl version
	I0819 18:02:32.145324   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:32.148818   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:02:32.192916   59547 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
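	[editor's note] The four lines above are the output of the crictl version probe that gates the "Will wait 60s for crictl version" step: minikube only proceeds once the CRI endpoint answers. A hedged sketch of performing the same check from Go with only the standard library is below; running crictl unprivileged on PATH and the simple key/value parsing are assumptions for illustration.

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Run the same probe the log shows; requires crictl on PATH (assumption).
		out, err := exec.Command("crictl", "version").Output()
		if err != nil {
			log.Fatalf("crictl not ready: %v", err)
		}
		info := map[string]string{}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			// Lines look like "RuntimeName:  cri-o"; split on the first colon.
			if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
				info[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		fmt.Printf("runtime %s %s (API %s)\n",
			info["RuntimeName"], info["RuntimeVersion"], info["RuntimeApiVersion"])
	}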
	I0819 18:02:32.192998   59547 ssh_runner.go:195] Run: crio --version
	I0819 18:02:32.221185   59547 ssh_runner.go:195] Run: crio --version
	I0819 18:02:32.249943   59547 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 18:02:32.250993   59547 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:02:32.254029   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:32.254436   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:02:23 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:02:32.254467   59547 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:02:32.254704   59547 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:02:32.258855   59547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:02:32.271596   59547 kubeadm.go:883] updating cluster {Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:02:32.271692   59547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:02:32.271736   59547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:02:32.303829   59547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 18:02:32.303898   59547 ssh_runner.go:195] Run: which lz4
	I0819 18:02:32.307696   59547 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:02:32.311493   59547 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:02:32.311529   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 18:02:33.825734   59547 crio.go:462] duration metric: took 1.518082344s to copy over tarball
	I0819 18:02:33.825803   59547 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:02:36.416294   59547 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.590463479s)
	I0819 18:02:36.416331   59547 crio.go:469] duration metric: took 2.590566331s to extract the tarball
	I0819 18:02:36.416366   59547 ssh_runner.go:146] rm: /preloaded.tar.lz4
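	[editor's note] The sequence above copies the preloaded image tarball into the guest over scp, unpacks it under /var with tar delegating decompression to lz4, then removes the tarball. The Go fragment below simply mirrors the logged extract command by shelling out, the way the ssh_runner does on the remote side; the local paths and the use of sudo are assumptions for illustration.

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Mirror the logged command:
		//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", // hand decompression to the lz4 binary
			"-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("extracting preload failed: %v", err)
		}
	}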
	I0819 18:02:36.458074   59547 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:02:36.500186   59547 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 18:02:36.500218   59547 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 18:02:36.500296   59547 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:02:36.500309   59547 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:02:36.500346   59547 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 18:02:36.500419   59547 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:02:36.500436   59547 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:02:36.500443   59547 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:02:36.500538   59547 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:02:36.500829   59547 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 18:02:36.501860   59547 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:02:36.501914   59547 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:02:36.502016   59547 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:02:36.502208   59547 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 18:02:36.502015   59547 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 18:02:36.502325   59547 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:02:36.502075   59547 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:02:36.502367   59547 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:02:36.737082   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 18:02:36.775696   59547 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 18:02:36.775734   59547 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 18:02:36.775770   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:36.780231   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:02:36.808266   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:02:36.810470   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:02:36.813605   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:02:36.817995   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 18:02:36.849907   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 18:02:36.855960   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:02:36.860327   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:02:36.933308   59547 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 18:02:36.933368   59547 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:02:36.933404   59547 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 18:02:36.933441   59547 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:02:36.933477   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:36.933414   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:36.933411   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:02:36.933322   59547 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 18:02:36.933683   59547 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:02:36.933739   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:36.988702   59547 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 18:02:36.988745   59547 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 18:02:36.988807   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:37.002764   59547 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 18:02:37.002812   59547 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:02:37.002821   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:02:37.002834   59547 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 18:02:37.002851   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:37.002859   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:02:37.002865   59547 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:02:37.002908   59547 ssh_runner.go:195] Run: which crictl
	I0819 18:02:37.024786   59547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 18:02:37.024849   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:02:37.024882   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:02:37.057084   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:02:37.057249   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:02:37.087041   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:02:37.087047   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:02:37.150413   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:02:37.150432   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:02:37.183270   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:02:37.183284   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:02:37.209690   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:02:37.209805   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:02:37.272249   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:02:37.276507   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:02:37.297512   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:02:37.309752   59547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 18:02:37.343938   59547 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:02:37.353010   59547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 18:02:37.373481   59547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 18:02:37.386291   59547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 18:02:37.398608   59547 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:02:37.403770   59547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 18:02:37.408319   59547 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 18:02:37.548230   59547 cache_images.go:92] duration metric: took 1.047994436s to LoadCachedImages
	W0819 18:02:37.548326   59547 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
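The cached-image fallback fails because the per-image cache under .minikube/cache/images was never populated, so the control-plane images end up being pulled during kubeadm's preflight phase further down. A hedged sketch of pre-pulling them on the node instead, assuming kubeadm lives in the versioned binaries directory shown in this log:

  # Illustrative only: pre-pull the images kubeadm will need for v1.20.0.
  minikube -p old-k8s-version-079123 ssh "sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images pull --kubernetes-version v1.20.0"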
	I0819 18:02:37.548344   59547 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.20.0 crio true true} ...
	I0819 18:02:37.548483   59547 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-079123 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
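The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders for this node; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal sketch for confirming what actually landed on the node, using the same ssh form as elsewhere in this report:

  # Illustrative only: show the kubelet unit plus its drop-ins as systemd resolves them.
  minikube -p old-k8s-version-079123 ssh "sudo systemctl cat kubelet"
  minikube -p old-k8s-version-079123 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"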
	I0819 18:02:37.548562   59547 ssh_runner.go:195] Run: crio config
	I0819 18:02:37.618978   59547 cni.go:84] Creating CNI manager for ""
	I0819 18:02:37.619001   59547 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:02:37.619019   59547 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:02:37.619040   59547 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079123 NodeName:old-k8s-version-079123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 18:02:37.619211   59547 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-079123"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:02:37.619282   59547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 18:02:37.631651   59547 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:02:37.631740   59547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:02:37.643950   59547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 18:02:37.663443   59547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:02:37.683522   59547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
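The kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch, assuming the pinned kubeadm binary path from this log, for exercising that config without changing any cluster state:

  # Illustrative only: dry-run the generated config with the v1.20.0 kubeadm binary.
  minikube -p old-k8s-version-079123 ssh "sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run"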
	I0819 18:02:37.702625   59547 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I0819 18:02:37.707125   59547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:02:37.722374   59547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:02:37.860026   59547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:02:37.877776   59547 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123 for IP: 192.168.39.246
	I0819 18:02:37.877804   59547 certs.go:194] generating shared ca certs ...
	I0819 18:02:37.877822   59547 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:37.877990   59547 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:02:37.878053   59547 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:02:37.878066   59547 certs.go:256] generating profile certs ...
	I0819 18:02:37.878118   59547 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.key
	I0819 18:02:37.878130   59547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt with IP's: []
	I0819 18:02:38.138648   59547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt ...
	I0819 18:02:38.138684   59547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: {Name:mka8d5821f3e9757e4890a1f1c9c5fdbb13512de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:38.138871   59547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.key ...
	I0819 18:02:38.138888   59547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.key: {Name:mk94454cb94e8a2ba3403952a1cbb5cc26ec86fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:38.138990   59547 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key.9240b1b2
	I0819 18:02:38.139011   59547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.crt.9240b1b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246]
	I0819 18:02:38.320250   59547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.crt.9240b1b2 ...
	I0819 18:02:38.320279   59547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.crt.9240b1b2: {Name:mk9387427e1c3bfa019a5a87f51b5ae6c2d7bcf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:38.320443   59547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key.9240b1b2 ...
	I0819 18:02:38.320459   59547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key.9240b1b2: {Name:mkca83a1b1d60c9fd7379c5dfaac014cb5c04dd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:38.320551   59547 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.crt.9240b1b2 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.crt
	I0819 18:02:38.320661   59547 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key.9240b1b2 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key
	I0819 18:02:38.320784   59547 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.key
	I0819 18:02:38.320807   59547 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.crt with IP's: []
	I0819 18:02:38.454815   59547 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.crt ...
	I0819 18:02:38.454849   59547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.crt: {Name:mka1ac407102f9e0e6eedc0bed85913b004ce0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:38.455039   59547 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.key ...
	I0819 18:02:38.455059   59547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.key: {Name:mk1c3645cf28eb8332cba8d4c7f2860b6744a62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:02:38.455289   59547 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:02:38.455335   59547 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:02:38.455354   59547 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:02:38.455389   59547 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:02:38.455418   59547 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:02:38.455453   59547 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:02:38.455511   59547 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:02:38.456345   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:02:38.482949   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:02:38.508808   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:02:38.532548   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:02:38.556100   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 18:02:38.580856   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:02:38.604054   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:02:38.626994   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:02:38.649326   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:02:38.673308   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:02:38.698282   59547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:02:38.721338   59547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
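At this point the shared CA material and the profile's apiserver and proxy-client key pairs have all been copied into /var/lib/minikube/certs on the node. A hedged way to double-check the SANs baked into the apiserver certificate (they should cover the IPs logged above: 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.246):

  # Illustrative only: print the Subject Alternative Names of the copied apiserver cert.
  minikube -p old-k8s-version-079123 ssh "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"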
	I0819 18:02:38.743734   59547 ssh_runner.go:195] Run: openssl version
	I0819 18:02:38.754476   59547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:02:38.771093   59547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:02:38.780844   59547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:02:38.780922   59547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:02:38.790194   59547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 18:02:38.810069   59547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:02:38.821061   59547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:02:38.825605   59547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:02:38.825667   59547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:02:38.831204   59547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:02:38.842020   59547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:02:38.852729   59547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:38.857361   59547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:38.857426   59547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:02:38.863141   59547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
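The openssl/ln pairs above implement OpenSSL's hashed-certificate directory layout: each CA placed under /usr/share/ca-certificates gets a <subject-hash>.0 symlink in /etc/ssl/certs so TLS clients can locate it. Reproduced by hand for the minikube CA, using the hash that appears in this log:

  # Illustrative only: compute the subject hash and create the matching symlinks.
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem      # prints b5213941 for this CA
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0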
	I0819 18:02:38.873746   59547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:02:38.877802   59547 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:02:38.877879   59547 kubeadm.go:392] StartCluster: {Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:02:38.877954   59547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:02:38.877995   59547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:02:38.922138   59547 cri.go:89] found id: ""
	I0819 18:02:38.922205   59547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:02:38.932002   59547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:02:38.941433   59547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:02:38.951695   59547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:02:38.951721   59547 kubeadm.go:157] found existing configuration files:
	
	I0819 18:02:38.951780   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:02:38.962663   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:02:38.962727   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:02:38.972843   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:02:38.981549   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:02:38.981621   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:02:38.990813   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:02:39.000156   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:02:39.000217   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:02:39.013134   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:02:39.022086   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:02:39.022135   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
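The grep/rm loop above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and here none of the four files exist yet, so every rm is a no-op. Per file, the check amounts to something like:

  # Illustrative only: keep the file if it targets this control plane, otherwise remove it.
  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/admin.conf \
    || sudo rm -f /etc/kubernetes/admin.conf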
	I0819 18:02:39.031022   59547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:02:39.319232   59547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:04:36.577533   59547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:04:36.577618   59547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:04:36.578944   59547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:04:36.579055   59547 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:04:36.579166   59547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:04:36.579293   59547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:04:36.579433   59547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:04:36.579515   59547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:04:36.581252   59547 out.go:235]   - Generating certificates and keys ...
	I0819 18:04:36.581333   59547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:04:36.581419   59547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:04:36.581526   59547 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:04:36.581602   59547 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:04:36.581685   59547 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:04:36.581765   59547 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:04:36.581844   59547 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:04:36.582026   59547 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-079123] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0819 18:04:36.582094   59547 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:04:36.582279   59547 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-079123] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0819 18:04:36.582368   59547 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:04:36.582448   59547 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:04:36.582488   59547 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:04:36.582552   59547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:04:36.582605   59547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:04:36.582649   59547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:04:36.582730   59547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:04:36.582811   59547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:04:36.582966   59547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:04:36.583037   59547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:04:36.583071   59547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:04:36.583130   59547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:04:36.585495   59547 out.go:235]   - Booting up control plane ...
	I0819 18:04:36.585571   59547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:04:36.585634   59547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:04:36.585704   59547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:04:36.585772   59547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:04:36.585926   59547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:04:36.585991   59547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:04:36.586065   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:04:36.586237   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:04:36.586305   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:04:36.586499   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:04:36.586568   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:04:36.586729   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:04:36.586788   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:04:36.586954   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:04:36.587029   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:04:36.587190   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:04:36.587197   59547 kubeadm.go:310] 
	I0819 18:04:36.587240   59547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:04:36.587284   59547 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:04:36.587291   59547 kubeadm.go:310] 
	I0819 18:04:36.587320   59547 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:04:36.587349   59547 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:04:36.587443   59547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:04:36.587455   59547 kubeadm.go:310] 
	I0819 18:04:36.587557   59547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:04:36.587596   59547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:04:36.587622   59547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:04:36.587631   59547 kubeadm.go:310] 
	I0819 18:04:36.587739   59547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:04:36.587812   59547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:04:36.587822   59547 kubeadm.go:310] 
	I0819 18:04:36.587914   59547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:04:36.587988   59547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:04:36.588066   59547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:04:36.588130   59547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:04:36.588180   59547 kubeadm.go:310] 
	W0819 18:04:36.588256   59547 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-079123] and IPs [192.168.39.246 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-079123] and IPs [192.168.39.246 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
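When kubeadm times out waiting for the control plane like this, the kubelet on the node is the first thing to inspect; the suggestions in the message above can be run from the host through the same ssh form used elsewhere in this report. A minimal sketch:

  # Illustrative only: kubelet status, recent kubelet logs, and any control-plane containers CRI-O started.
  minikube -p old-k8s-version-079123 ssh "sudo systemctl status kubelet --no-pager"
  minikube -p old-k8s-version-079123 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
  minikube -p old-k8s-version-079123 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"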
	
	I0819 18:04:36.588290   59547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:04:38.120267   59547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.531955978s)
	I0819 18:04:38.120348   59547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:04:38.134667   59547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:04:38.143762   59547 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:04:38.143783   59547 kubeadm.go:157] found existing configuration files:
	
	I0819 18:04:38.143829   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:04:38.152516   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:04:38.152576   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:04:38.161636   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:04:38.170419   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:04:38.170490   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:04:38.179800   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:04:38.188471   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:04:38.188529   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:04:38.197523   59547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:04:38.206122   59547 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:04:38.206191   59547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:04:38.215076   59547 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:04:38.289087   59547 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:04:38.289144   59547 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:04:38.427711   59547 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:04:38.427846   59547 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:04:38.427968   59547 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:04:38.596068   59547 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:04:38.597865   59547 out.go:235]   - Generating certificates and keys ...
	I0819 18:04:38.597961   59547 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:04:38.598053   59547 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:04:38.598190   59547 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:04:38.598302   59547 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:04:38.598424   59547 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:04:38.598515   59547 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:04:38.599150   59547 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:04:38.599955   59547 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:04:38.600441   59547 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:04:38.601066   59547 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:04:38.601191   59547 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:04:38.601268   59547 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:04:38.687878   59547 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:04:38.976744   59547 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:04:39.215077   59547 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:04:39.311669   59547 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:04:39.325449   59547 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:04:39.326560   59547 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:04:39.326638   59547 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:04:39.460548   59547 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:04:39.462330   59547 out.go:235]   - Booting up control plane ...
	I0819 18:04:39.462451   59547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:04:39.465407   59547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:04:39.466703   59547 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:04:39.468438   59547 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:04:39.472725   59547 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:05:19.475640   59547 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:05:19.475756   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:05:19.475966   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:05:24.476434   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:05:24.476613   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:05:34.477460   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:05:34.477670   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:05:54.476964   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:05:54.477159   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:06:34.477015   59547 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:06:34.477290   59547 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:06:34.477309   59547 kubeadm.go:310] 
	I0819 18:06:34.477365   59547 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:06:34.477418   59547 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:06:34.477430   59547 kubeadm.go:310] 
	I0819 18:06:34.477490   59547 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:06:34.477532   59547 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:06:34.477672   59547 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:06:34.477683   59547 kubeadm.go:310] 
	I0819 18:06:34.477829   59547 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:06:34.477875   59547 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:06:34.477914   59547 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:06:34.477922   59547 kubeadm.go:310] 
	I0819 18:06:34.478035   59547 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:06:34.478127   59547 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:06:34.478136   59547 kubeadm.go:310] 
	I0819 18:06:34.478272   59547 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:06:34.478393   59547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:06:34.478490   59547 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:06:34.478584   59547 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:06:34.478595   59547 kubeadm.go:310] 
	I0819 18:06:34.479340   59547 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:06:34.479467   59547 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:06:34.479571   59547 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:06:34.479649   59547 kubeadm.go:394] duration metric: took 3m55.601774361s to StartCluster
	I0819 18:06:34.479701   59547 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:06:34.479769   59547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:06:34.524319   59547 cri.go:89] found id: ""
	I0819 18:06:34.524369   59547 logs.go:276] 0 containers: []
	W0819 18:06:34.524381   59547 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:06:34.524390   59547 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:06:34.524462   59547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:06:34.568771   59547 cri.go:89] found id: ""
	I0819 18:06:34.568798   59547 logs.go:276] 0 containers: []
	W0819 18:06:34.568808   59547 logs.go:278] No container was found matching "etcd"
	I0819 18:06:34.568815   59547 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:06:34.568873   59547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:06:34.611478   59547 cri.go:89] found id: ""
	I0819 18:06:34.611509   59547 logs.go:276] 0 containers: []
	W0819 18:06:34.611519   59547 logs.go:278] No container was found matching "coredns"
	I0819 18:06:34.611526   59547 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:06:34.611591   59547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:06:34.650596   59547 cri.go:89] found id: ""
	I0819 18:06:34.650623   59547 logs.go:276] 0 containers: []
	W0819 18:06:34.650634   59547 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:06:34.650641   59547 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:06:34.650707   59547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:06:34.697426   59547 cri.go:89] found id: ""
	I0819 18:06:34.697449   59547 logs.go:276] 0 containers: []
	W0819 18:06:34.697457   59547 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:06:34.697463   59547 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:06:34.697503   59547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:06:34.748234   59547 cri.go:89] found id: ""
	I0819 18:06:34.748253   59547 logs.go:276] 0 containers: []
	W0819 18:06:34.748261   59547 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:06:34.748266   59547 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:06:34.748306   59547 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:06:34.796392   59547 cri.go:89] found id: ""
	I0819 18:06:34.796421   59547 logs.go:276] 0 containers: []
	W0819 18:06:34.796432   59547 logs.go:278] No container was found matching "kindnet"
	I0819 18:06:34.796444   59547 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:06:34.796460   59547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:06:34.954950   59547 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:06:34.954976   59547 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:06:34.954993   59547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:06:35.074640   59547 logs.go:123] Gathering logs for container status ...
	I0819 18:06:35.074674   59547 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:06:35.118237   59547 logs.go:123] Gathering logs for kubelet ...
	I0819 18:06:35.118263   59547 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:06:35.166037   59547 logs.go:123] Gathering logs for dmesg ...
	I0819 18:06:35.166068   59547 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0819 18:06:35.184559   59547 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:06:35.184630   59547 out.go:270] * 
	W0819 18:06:35.184699   59547 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:06:35.184717   59547 out.go:270] * 
	W0819 18:06:35.185878   59547 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:06:35.189336   59547 out.go:201] 
	W0819 18:06:35.190680   59547 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:06:35.190729   59547 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:06:35.190757   59547 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:06:35.192297   59547 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-079123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 6 (210.687251ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:35.450273   62344 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-079123" does not appear in /home/jenkins/minikube-integration/19478-10654/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079123" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.71s)
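The kubeadm output above already names the diagnostic path for a kubelet that never becomes healthy on port 10248. A minimal sketch of that sequence on the affected node is shown below (assuming `minikube ssh -p old-k8s-version-079123` for access; the crictl socket path and the cgroup-driver flag are taken from the log's own suggestions and have not been verified against this run):

	# Inspect the kubelet directly on the node
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List control-plane containers under CRI-O and pull logs from any that crashed
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# Retry the start with the cgroup driver minikube suggests, keeping the original flags from the failing command
	minikube start -p old-k8s-version-079123 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet journal shows a cgroup-driver mismatch with CRI-O, the --extra-config flag above is the remedy minikube itself points to (see the related issue kubernetes/minikube#4172 referenced in the log).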

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-233969 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-233969 --alsologtostderr -v=3: exit status 82 (2m0.486698943s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-233969"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:03:48.702407   61024 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:03:48.702727   61024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:03:48.702738   61024 out.go:358] Setting ErrFile to fd 2...
	I0819 18:03:48.702744   61024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:03:48.702988   61024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:03:48.703209   61024 out.go:352] Setting JSON to false
	I0819 18:03:48.703281   61024 mustload.go:65] Loading cluster: no-preload-233969
	I0819 18:03:48.703583   61024 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:48.703649   61024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/config.json ...
	I0819 18:03:48.703811   61024 mustload.go:65] Loading cluster: no-preload-233969
	I0819 18:03:48.703908   61024 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:03:48.703939   61024 stop.go:39] StopHost: no-preload-233969
	I0819 18:03:48.704351   61024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:03:48.704403   61024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:03:48.718796   61024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0819 18:03:48.719229   61024 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:03:48.720027   61024 main.go:141] libmachine: Using API Version  1
	I0819 18:03:48.720058   61024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:03:48.720515   61024 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:03:48.723010   61024 out.go:177] * Stopping node "no-preload-233969"  ...
	I0819 18:03:48.724172   61024 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 18:03:48.724211   61024 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:03:48.724491   61024 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 18:03:48.724518   61024 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:03:48.727119   61024 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:03:48.727504   61024 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:02:45 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:03:48.727535   61024 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:03:48.727729   61024 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:03:48.727905   61024 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:03:48.728055   61024 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:03:48.728206   61024 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:03:48.812290   61024 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 18:03:48.871076   61024 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 18:03:48.937237   61024 main.go:141] libmachine: Stopping "no-preload-233969"...
	I0819 18:03:48.937270   61024 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:03:48.938933   61024 main.go:141] libmachine: (no-preload-233969) Calling .Stop
	I0819 18:03:48.942952   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 0/120
	I0819 18:03:49.944487   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 1/120
	I0819 18:03:50.945985   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 2/120
	I0819 18:03:51.947516   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 3/120
	I0819 18:03:52.948845   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 4/120
	I0819 18:03:53.950988   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 5/120
	I0819 18:03:54.952743   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 6/120
	I0819 18:03:55.954224   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 7/120
	I0819 18:03:56.955728   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 8/120
	I0819 18:03:57.958126   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 9/120
	I0819 18:03:58.959408   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 10/120
	I0819 18:03:59.961068   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 11/120
	I0819 18:04:00.962500   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 12/120
	I0819 18:04:01.963973   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 13/120
	I0819 18:04:02.965579   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 14/120
	I0819 18:04:03.967277   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 15/120
	I0819 18:04:04.968562   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 16/120
	I0819 18:04:05.969728   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 17/120
	I0819 18:04:06.971054   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 18/120
	I0819 18:04:07.972349   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 19/120
	I0819 18:04:08.974632   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 20/120
	I0819 18:04:09.976168   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 21/120
	I0819 18:04:10.977420   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 22/120
	I0819 18:04:11.978818   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 23/120
	I0819 18:04:12.980201   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 24/120
	I0819 18:04:13.981868   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 25/120
	I0819 18:04:14.983077   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 26/120
	I0819 18:04:15.984255   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 27/120
	I0819 18:04:16.985484   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 28/120
	I0819 18:04:17.986807   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 29/120
	I0819 18:04:18.988983   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 30/120
	I0819 18:04:19.990582   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 31/120
	I0819 18:04:20.991847   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 32/120
	I0819 18:04:21.993516   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 33/120
	I0819 18:04:22.994763   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 34/120
	I0819 18:04:23.996807   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 35/120
	I0819 18:04:24.998207   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 36/120
	I0819 18:04:25.999690   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 37/120
	I0819 18:04:27.001360   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 38/120
	I0819 18:04:28.002734   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 39/120
	I0819 18:04:29.004997   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 40/120
	I0819 18:04:30.006309   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 41/120
	I0819 18:04:31.007590   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 42/120
	I0819 18:04:32.008814   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 43/120
	I0819 18:04:33.010378   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 44/120
	I0819 18:04:34.012266   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 45/120
	I0819 18:04:35.013952   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 46/120
	I0819 18:04:36.015299   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 47/120
	I0819 18:04:37.016482   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 48/120
	I0819 18:04:38.017988   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 49/120
	I0819 18:04:39.020367   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 50/120
	I0819 18:04:40.021652   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 51/120
	I0819 18:04:41.023203   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 52/120
	I0819 18:04:42.024788   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 53/120
	I0819 18:04:43.026068   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 54/120
	I0819 18:04:44.027957   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 55/120
	I0819 18:04:45.029426   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 56/120
	I0819 18:04:46.030808   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 57/120
	I0819 18:04:47.032167   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 58/120
	I0819 18:04:48.033393   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 59/120
	I0819 18:04:49.034554   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 60/120
	I0819 18:04:50.036037   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 61/120
	I0819 18:04:51.037282   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 62/120
	I0819 18:04:52.038704   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 63/120
	I0819 18:04:53.040001   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 64/120
	I0819 18:04:54.042086   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 65/120
	I0819 18:04:55.043387   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 66/120
	I0819 18:04:56.045138   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 67/120
	I0819 18:04:57.047317   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 68/120
	I0819 18:04:58.048688   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 69/120
	I0819 18:04:59.051022   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 70/120
	I0819 18:05:00.052510   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 71/120
	I0819 18:05:01.053952   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 72/120
	I0819 18:05:02.055295   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 73/120
	I0819 18:05:03.056690   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 74/120
	I0819 18:05:04.058746   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 75/120
	I0819 18:05:05.060533   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 76/120
	I0819 18:05:06.061995   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 77/120
	I0819 18:05:07.063633   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 78/120
	I0819 18:05:08.065104   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 79/120
	I0819 18:05:09.067594   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 80/120
	I0819 18:05:10.069042   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 81/120
	I0819 18:05:11.070825   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 82/120
	I0819 18:05:12.072133   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 83/120
	I0819 18:05:13.074319   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 84/120
	I0819 18:05:14.076283   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 85/120
	I0819 18:05:15.077774   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 86/120
	I0819 18:05:16.079203   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 87/120
	I0819 18:05:17.080631   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 88/120
	I0819 18:05:18.082364   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 89/120
	I0819 18:05:19.084679   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 90/120
	I0819 18:05:20.086232   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 91/120
	I0819 18:05:21.087709   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 92/120
	I0819 18:05:22.089416   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 93/120
	I0819 18:05:23.090912   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 94/120
	I0819 18:05:24.093035   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 95/120
	I0819 18:05:25.095408   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 96/120
	I0819 18:05:26.097187   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 97/120
	I0819 18:05:27.098665   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 98/120
	I0819 18:05:28.100241   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 99/120
	I0819 18:05:29.102696   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 100/120
	I0819 18:05:30.103975   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 101/120
	I0819 18:05:31.106552   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 102/120
	I0819 18:05:32.107997   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 103/120
	I0819 18:05:33.109959   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 104/120
	I0819 18:05:34.111894   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 105/120
	I0819 18:05:35.113555   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 106/120
	I0819 18:05:36.115177   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 107/120
	I0819 18:05:37.116447   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 108/120
	I0819 18:05:38.118031   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 109/120
	I0819 18:05:39.120180   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 110/120
	I0819 18:05:40.121571   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 111/120
	I0819 18:05:41.122876   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 112/120
	I0819 18:05:42.124376   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 113/120
	I0819 18:05:43.125737   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 114/120
	I0819 18:05:44.127950   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 115/120
	I0819 18:05:45.129621   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 116/120
	I0819 18:05:46.131027   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 117/120
	I0819 18:05:47.132245   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 118/120
	I0819 18:05:48.133733   61024 main.go:141] libmachine: (no-preload-233969) Waiting for machine to stop 119/120
	I0819 18:05:49.134239   61024 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 18:05:49.134321   61024 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 18:05:49.136182   61024 out.go:201] 
	W0819 18:05:49.137576   61024 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 18:05:49.137601   61024 out.go:270] * 
	* 
	W0819 18:05:49.141498   61024 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:05:49.142863   61024 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-233969 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969: exit status 3 (18.514863096s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:07.661168   61672 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.8:22: connect: no route to host
	E0819 18:06:07.661188   61672 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.8:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-233969" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.00s)
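The failure above follows a fixed pattern: libmachine polls the VM state once per second ("Waiting for machine to stop N/120") and gives up after 120 attempts, which minikube then surfaces as GUEST_STOP_TIMEOUT. Below is a minimal Go sketch of that bounded-polling pattern; the names (waitForStop, the stand-in state function) are hypothetical and this is not minikube's actual implementation.

// waitstop.go - illustrative sketch of the 120-attempt stop-wait loop implied by
// the "Waiting for machine to stop N/120" lines above. Hypothetical names only.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls state() once per interval, up to maxAttempts times, and fails
// if the machine still is not "Stopped" - mirroring the
// `unable to stop vm, current state "Running"` error in the log.
func waitForStop(state func() string, maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		if state() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A stand-in state function that never reports "Stopped"; the real loop in the
	// log uses 120 one-second attempts, shortened here so the sketch finishes quickly.
	stuck := func() string { return "Running" }
	if err := waitForStop(stuck, 3, 100*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}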

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-813424 --alsologtostderr -v=3
E0819 18:05:21.263213   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-813424 --alsologtostderr -v=3: exit status 82 (2m0.543320789s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-813424"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:04:27.516094   61317 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:04:27.516208   61317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:04:27.516216   61317 out.go:358] Setting ErrFile to fd 2...
	I0819 18:04:27.516220   61317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:04:27.516394   61317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:04:27.516599   61317 out.go:352] Setting JSON to false
	I0819 18:04:27.516672   61317 mustload.go:65] Loading cluster: default-k8s-diff-port-813424
	I0819 18:04:27.517024   61317 config.go:182] Loaded profile config "default-k8s-diff-port-813424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:04:27.517091   61317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/config.json ...
	I0819 18:04:27.517267   61317 mustload.go:65] Loading cluster: default-k8s-diff-port-813424
	I0819 18:04:27.517367   61317 config.go:182] Loaded profile config "default-k8s-diff-port-813424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:04:27.517399   61317 stop.go:39] StopHost: default-k8s-diff-port-813424
	I0819 18:04:27.517791   61317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:04:27.517826   61317 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:04:27.532734   61317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I0819 18:04:27.533221   61317 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:04:27.533890   61317 main.go:141] libmachine: Using API Version  1
	I0819 18:04:27.533913   61317 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:04:27.534299   61317 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:04:27.536605   61317 out.go:177] * Stopping node "default-k8s-diff-port-813424"  ...
	I0819 18:04:27.538003   61317 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 18:04:27.538044   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Calling .DriverName
	I0819 18:04:27.538276   61317 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 18:04:27.538305   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Calling .GetSSHHostname
	I0819 18:04:27.541588   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) DBG | domain default-k8s-diff-port-813424 has defined MAC address 52:54:00:8e:69:02 in network mk-default-k8s-diff-port-813424
	I0819 18:04:27.542032   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:69:02", ip: ""} in network mk-default-k8s-diff-port-813424: {Iface:virbr3 ExpiryTime:2024-08-19 19:03:06 +0000 UTC Type:0 Mac:52:54:00:8e:69:02 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:default-k8s-diff-port-813424 Clientid:01:52:54:00:8e:69:02}
	I0819 18:04:27.542061   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) DBG | domain default-k8s-diff-port-813424 has defined IP address 192.168.61.243 and MAC address 52:54:00:8e:69:02 in network mk-default-k8s-diff-port-813424
	I0819 18:04:27.542228   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Calling .GetSSHPort
	I0819 18:04:27.542430   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Calling .GetSSHKeyPath
	I0819 18:04:27.542613   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Calling .GetSSHUsername
	I0819 18:04:27.542815   61317 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/default-k8s-diff-port-813424/id_rsa Username:docker}
	I0819 18:04:27.672912   61317 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 18:04:27.731043   61317 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 18:04:27.806911   61317 main.go:141] libmachine: Stopping "default-k8s-diff-port-813424"...
	I0819 18:04:27.806974   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Calling .GetState
	I0819 18:04:27.808315   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Calling .Stop
	I0819 18:04:27.811820   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 0/120
	I0819 18:04:28.813440   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 1/120
	I0819 18:04:29.814864   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 2/120
	I0819 18:04:30.816244   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 3/120
	I0819 18:04:31.817732   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 4/120
	I0819 18:04:32.819800   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 5/120
	I0819 18:04:33.821195   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 6/120
	I0819 18:04:34.822491   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 7/120
	I0819 18:04:35.823734   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 8/120
	I0819 18:04:36.825144   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 9/120
	I0819 18:04:37.827245   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 10/120
	I0819 18:04:38.828679   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 11/120
	I0819 18:04:39.829984   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 12/120
	I0819 18:04:40.831611   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 13/120
	I0819 18:04:41.832941   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 14/120
	I0819 18:04:42.834932   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 15/120
	I0819 18:04:43.836472   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 16/120
	I0819 18:04:44.837792   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 17/120
	I0819 18:04:45.839038   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 18/120
	I0819 18:04:46.840335   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 19/120
	I0819 18:04:47.842727   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 20/120
	I0819 18:04:48.844027   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 21/120
	I0819 18:04:49.845335   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 22/120
	I0819 18:04:50.847026   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 23/120
	I0819 18:04:51.848227   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 24/120
	I0819 18:04:52.850383   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 25/120
	I0819 18:04:53.851734   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 26/120
	I0819 18:04:54.853054   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 27/120
	I0819 18:04:55.854306   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 28/120
	I0819 18:04:56.855611   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 29/120
	I0819 18:04:57.857906   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 30/120
	I0819 18:04:58.859153   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 31/120
	I0819 18:04:59.860597   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 32/120
	I0819 18:05:00.861942   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 33/120
	I0819 18:05:01.863417   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 34/120
	I0819 18:05:02.865487   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 35/120
	I0819 18:05:03.866804   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 36/120
	I0819 18:05:04.868307   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 37/120
	I0819 18:05:05.869628   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 38/120
	I0819 18:05:06.871247   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 39/120
	I0819 18:05:07.873420   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 40/120
	I0819 18:05:08.874696   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 41/120
	I0819 18:05:09.875977   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 42/120
	I0819 18:05:10.877224   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 43/120
	I0819 18:05:11.878538   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 44/120
	I0819 18:05:12.880461   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 45/120
	I0819 18:05:13.881776   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 46/120
	I0819 18:05:14.883029   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 47/120
	I0819 18:05:15.884301   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 48/120
	I0819 18:05:16.885667   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 49/120
	I0819 18:05:17.887877   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 50/120
	I0819 18:05:18.889396   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 51/120
	I0819 18:05:19.891423   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 52/120
	I0819 18:05:20.892819   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 53/120
	I0819 18:05:21.894982   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 54/120
	I0819 18:05:22.896947   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 55/120
	I0819 18:05:23.898290   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 56/120
	I0819 18:05:24.899851   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 57/120
	I0819 18:05:25.901375   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 58/120
	I0819 18:05:26.903108   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 59/120
	I0819 18:05:27.905589   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 60/120
	I0819 18:05:28.907064   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 61/120
	I0819 18:05:29.909469   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 62/120
	I0819 18:05:30.911180   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 63/120
	I0819 18:05:31.912514   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 64/120
	I0819 18:05:32.914686   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 65/120
	I0819 18:05:33.916254   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 66/120
	I0819 18:05:34.917557   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 67/120
	I0819 18:05:35.919052   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 68/120
	I0819 18:05:36.920424   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 69/120
	I0819 18:05:37.921833   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 70/120
	I0819 18:05:38.923118   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 71/120
	I0819 18:05:39.924322   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 72/120
	I0819 18:05:40.926443   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 73/120
	I0819 18:05:41.927862   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 74/120
	I0819 18:05:42.929978   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 75/120
	I0819 18:05:43.931920   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 76/120
	I0819 18:05:44.933325   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 77/120
	I0819 18:05:45.935363   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 78/120
	I0819 18:05:46.936718   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 79/120
	I0819 18:05:47.938915   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 80/120
	I0819 18:05:48.940460   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 81/120
	I0819 18:05:49.941973   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 82/120
	I0819 18:05:50.943377   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 83/120
	I0819 18:05:51.944975   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 84/120
	I0819 18:05:52.947226   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 85/120
	I0819 18:05:53.948811   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 86/120
	I0819 18:05:54.950295   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 87/120
	I0819 18:05:55.951639   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 88/120
	I0819 18:05:56.953170   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 89/120
	I0819 18:05:57.954952   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 90/120
	I0819 18:05:58.956483   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 91/120
	I0819 18:05:59.958156   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 92/120
	I0819 18:06:00.960016   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 93/120
	I0819 18:06:01.961652   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 94/120
	I0819 18:06:02.963700   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 95/120
	I0819 18:06:03.965226   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 96/120
	I0819 18:06:04.966826   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 97/120
	I0819 18:06:05.968154   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 98/120
	I0819 18:06:06.969918   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 99/120
	I0819 18:06:07.972310   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 100/120
	I0819 18:06:08.973809   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 101/120
	I0819 18:06:09.976096   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 102/120
	I0819 18:06:10.977668   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 103/120
	I0819 18:06:11.979190   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 104/120
	I0819 18:06:12.981403   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 105/120
	I0819 18:06:13.983414   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 106/120
	I0819 18:06:14.985137   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 107/120
	I0819 18:06:15.987383   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 108/120
	I0819 18:06:16.988618   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 109/120
	I0819 18:06:17.990716   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 110/120
	I0819 18:06:18.992451   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 111/120
	I0819 18:06:19.993942   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 112/120
	I0819 18:06:20.995275   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 113/120
	I0819 18:06:21.997325   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 114/120
	I0819 18:06:22.998939   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 115/120
	I0819 18:06:24.000939   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 116/120
	I0819 18:06:25.003339   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 117/120
	I0819 18:06:26.005195   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 118/120
	I0819 18:06:27.006573   61317 main.go:141] libmachine: (default-k8s-diff-port-813424) Waiting for machine to stop 119/120
	I0819 18:06:28.007767   61317 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 18:06:28.007844   61317 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 18:06:28.009763   61317 out.go:201] 
	W0819 18:06:28.011304   61317 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 18:06:28.011328   61317 out.go:270] * 
	* 
	W0819 18:06:28.013993   61317 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:06:28.015394   61317 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-813424 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424: exit status 3 (18.555165355s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:46.573146   62211 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	E0819 18:06:46.573164   62211 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-813424" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969: exit status 3 (3.167717265s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:10.829139   62002 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.8:22: connect: no route to host
	E0819 18:06:10.829165   62002 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.8:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-233969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-233969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153609397s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.8:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-233969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969: exit status 3 (3.061670158s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:20.045059   62108 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.8:22: connect: no route to host
	E0819 18:06:20.045084   62108 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.8:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-233969" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
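Both post-stop checks above fail the same way: the status probe tries to open an SSH session to the VM and gets "dial tcp <ip>:22: connect: no route to host", so the host is reported as "Error" rather than "Stopped". The sketch below reproduces that class of error with a plain TCP dial; it is illustrative only and not minikube's status code.

// sshprobe.go - illustrative reachability probe producing the same error class as
// the stderr above ("dial tcp 192.168.50.8:22: connect: no route to host").
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.50.8:22" // address taken from the log above
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// When the VM's network is unreachable but libvirt still reports the domain
		// as "Running", the dial fails and the status ends up as "Error", not "Stopped".
		fmt.Println("status error:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}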

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-079123 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-079123 create -f testdata/busybox.yaml: exit status 1 (41.590038ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-079123" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-079123 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 6 (209.890919ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:35.703043   62384 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-079123" does not appear in /home/jenkins/minikube-integration/19478-10654/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079123" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 6 (211.256597ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:35.915123   62414 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-079123" does not appear in /home/jenkins/minikube-integration/19478-10654/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079123" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)
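The DeployApp failure above is a knock-on effect of the earlier start failure: the profile's context was never written to the kubeconfig, so kubectl reports "context ... does not exist" and the status helper reports that the endpoint "does not appear in .../kubeconfig". As a rough illustration, that check can be reproduced with client-go's kubeconfig loader; this assumes the k8s.io/client-go module is available and is not part of the minikube test suite.

// ctxcheck.go - rough sketch of the kubeconfig check implied by the errors above.
// Assumes k8s.io/client-go is available; not part of the minikube test suite.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG") // e.g. the minikube-integration kubeconfig in the log
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	name := "old-k8s-version-079123"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not appear in %s\n", name, path)
	}
}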

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-079123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-079123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.078798289s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-079123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-079123 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-079123 describe deploy/metrics-server -n kube-system: exit status 1 (45.801357ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-079123" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on the metrics-server deployment. args "kubectl --context old-k8s-version-079123 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 6 (216.487306ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:08:22.255873   63096 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-079123" does not appear in /home/jenkins/minikube-integration/19478-10654/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-079123" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.34s)
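The addon-enable failure above is again downstream of the broken control plane: the enable callbacks run kubectl apply inside the VM against the apiserver on localhost:8443, and the connection is refused because nothing is listening there. A minimal probe of that endpoint looks like the sketch below; it is illustrative only, and the skip-verify TLS setting is just for the probe, since the apiserver certificate is issued by the cluster's own CA.

// healthz.go - illustrative probe of the apiserver endpoint that the addon apply in
// the log above could not reach ("connection to the server localhost:8443 was refused").
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// Skip certificate verification for this probe only; the apiserver cert is
		// signed by the cluster's own CA rather than a public one.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}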

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424: exit status 3 (3.167629212s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:49.741103   62621 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	E0819 18:06:49.741124   62621 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-813424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-813424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15213836s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-813424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424: exit status 3 (3.063803038s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:06:58.957158   62703 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	E0819 18:06:58.957181   62703 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-813424" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (703.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-079123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0819 18:10:21.264075   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-079123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m42.323722929s)

                                                
                                                
-- stdout --
	* [old-k8s-version-079123] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-079123" primary control-plane node in "old-k8s-version-079123" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-079123" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:08:24.756512   63216 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:08:24.756676   63216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:24.756686   63216 out.go:358] Setting ErrFile to fd 2...
	I0819 18:08:24.756692   63216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:08:24.756941   63216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:08:24.757516   63216 out.go:352] Setting JSON to false
	I0819 18:08:24.758408   63216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6650,"bootTime":1724084255,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:08:24.758468   63216 start.go:139] virtualization: kvm guest
	I0819 18:08:24.760637   63216 out.go:177] * [old-k8s-version-079123] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:08:24.762269   63216 notify.go:220] Checking for updates...
	I0819 18:08:24.762299   63216 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:08:24.763679   63216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:08:24.765033   63216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:08:24.766379   63216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:08:24.767657   63216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:08:24.768889   63216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:08:24.770520   63216 config.go:182] Loaded profile config "old-k8s-version-079123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 18:08:24.771138   63216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:24.771198   63216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:24.785838   63216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0819 18:08:24.786203   63216 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:24.786810   63216 main.go:141] libmachine: Using API Version  1
	I0819 18:08:24.786829   63216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:24.787122   63216 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:24.787295   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:08:24.789137   63216 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 18:08:24.790182   63216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:08:24.790481   63216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:08:24.790528   63216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:08:24.804930   63216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0819 18:08:24.805349   63216 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:08:24.805825   63216 main.go:141] libmachine: Using API Version  1
	I0819 18:08:24.805844   63216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:08:24.806124   63216 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:08:24.806314   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:08:24.840279   63216 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:08:24.841443   63216 start.go:297] selected driver: kvm2
	I0819 18:08:24.841461   63216 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:08:24.841586   63216 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:08:24.842362   63216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:08:24.842466   63216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:08:24.857020   63216 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:08:24.857375   63216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:08:24.857443   63216 cni.go:84] Creating CNI manager for ""
	I0819 18:08:24.857456   63216 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:08:24.857491   63216 start.go:340] cluster config:
	{Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:08:24.857587   63216 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:08:24.859352   63216 out.go:177] * Starting "old-k8s-version-079123" primary control-plane node in "old-k8s-version-079123" cluster
	I0819 18:08:24.860491   63216 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:08:24.860519   63216 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:08:24.860528   63216 cache.go:56] Caching tarball of preloaded images
	I0819 18:08:24.860611   63216 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:08:24.860622   63216 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 18:08:24.860718   63216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/config.json ...
	I0819 18:08:24.860931   63216 start.go:360] acquireMachinesLock for old-k8s-version-079123: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:11:36.701533   63216 start.go:364] duration metric: took 3m11.840571562s to acquireMachinesLock for "old-k8s-version-079123"
	I0819 18:11:36.701604   63216 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:11:36.701615   63216 fix.go:54] fixHost starting: 
	I0819 18:11:36.702048   63216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:11:36.702088   63216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:11:36.718580   63216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I0819 18:11:36.719037   63216 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:11:36.719563   63216 main.go:141] libmachine: Using API Version  1
	I0819 18:11:36.719585   63216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:11:36.719920   63216 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:11:36.720081   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:11:36.720238   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetState
	I0819 18:11:36.721740   63216 fix.go:112] recreateIfNeeded on old-k8s-version-079123: state=Stopped err=<nil>
	I0819 18:11:36.721780   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	W0819 18:11:36.721935   63216 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:11:36.724138   63216 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-079123" ...
	I0819 18:11:36.725479   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .Start
	I0819 18:11:36.725657   63216 main.go:141] libmachine: (old-k8s-version-079123) Ensuring networks are active...
	I0819 18:11:36.726386   63216 main.go:141] libmachine: (old-k8s-version-079123) Ensuring network default is active
	I0819 18:11:36.726713   63216 main.go:141] libmachine: (old-k8s-version-079123) Ensuring network mk-old-k8s-version-079123 is active
	I0819 18:11:36.727034   63216 main.go:141] libmachine: (old-k8s-version-079123) Getting domain xml...
	I0819 18:11:36.727756   63216 main.go:141] libmachine: (old-k8s-version-079123) Creating domain...
	I0819 18:11:37.974480   63216 main.go:141] libmachine: (old-k8s-version-079123) Waiting to get IP...
	I0819 18:11:37.975668   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:37.976085   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:37.976148   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:37.976060   64608 retry.go:31] will retry after 306.014442ms: waiting for machine to come up
	I0819 18:11:38.283628   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:38.284020   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:38.284047   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:38.283989   64608 retry.go:31] will retry after 366.017951ms: waiting for machine to come up
	I0819 18:11:38.651653   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:38.652294   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:38.652321   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:38.652255   64608 retry.go:31] will retry after 422.728578ms: waiting for machine to come up
	I0819 18:11:39.077238   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:39.077773   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:39.077805   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:39.077741   64608 retry.go:31] will retry after 418.112232ms: waiting for machine to come up
	I0819 18:11:39.497394   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:39.497855   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:39.497888   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:39.497809   64608 retry.go:31] will retry after 634.196719ms: waiting for machine to come up
	I0819 18:11:40.133748   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:40.134295   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:40.134326   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:40.134243   64608 retry.go:31] will retry after 782.867919ms: waiting for machine to come up
	I0819 18:11:40.918914   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:40.919393   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:40.919424   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:40.919328   64608 retry.go:31] will retry after 792.848589ms: waiting for machine to come up
	I0819 18:11:41.713780   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:41.714230   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:41.714254   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:41.714196   64608 retry.go:31] will retry after 1.063989182s: waiting for machine to come up
	I0819 18:11:42.779926   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:42.780325   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:42.780351   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:42.780298   64608 retry.go:31] will retry after 1.62686057s: waiting for machine to come up
	I0819 18:11:44.409007   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:44.409485   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:44.409518   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:44.409420   64608 retry.go:31] will retry after 2.134375562s: waiting for machine to come up
	I0819 18:11:46.546077   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:46.546616   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:46.546642   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:46.546569   64608 retry.go:31] will retry after 1.880495373s: waiting for machine to come up
	I0819 18:11:48.429561   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:48.430011   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:48.430044   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:48.429964   64608 retry.go:31] will retry after 2.863529505s: waiting for machine to come up
	I0819 18:11:51.297291   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:51.297778   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | unable to find current IP address of domain old-k8s-version-079123 in network mk-old-k8s-version-079123
	I0819 18:11:51.297808   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | I0819 18:11:51.297736   64608 retry.go:31] will retry after 3.891825508s: waiting for machine to come up
	I0819 18:11:55.193973   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.194472   63216 main.go:141] libmachine: (old-k8s-version-079123) Found IP for machine: 192.168.39.246
	I0819 18:11:55.194499   63216 main.go:141] libmachine: (old-k8s-version-079123) Reserving static IP address...
	I0819 18:11:55.194514   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has current primary IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.194981   63216 main.go:141] libmachine: (old-k8s-version-079123) Reserved static IP address: 192.168.39.246
	I0819 18:11:55.195001   63216 main.go:141] libmachine: (old-k8s-version-079123) Waiting for SSH to be available...
	I0819 18:11:55.195017   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "old-k8s-version-079123", mac: "52:54:00:97:ce:99", ip: "192.168.39.246"} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.195041   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | skip adding static IP to network mk-old-k8s-version-079123 - found existing host DHCP lease matching {name: "old-k8s-version-079123", mac: "52:54:00:97:ce:99", ip: "192.168.39.246"}
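	The retry loop above is just polling libvirt's DHCP leases until the VM's MAC address shows up. A minimal shell sketch of the same check, assuming the network name and MAC from this run and that virsh is available on the host:
	    # Poll libvirt for a DHCP lease matching the VM's MAC address.
	    NET=mk-old-k8s-version-079123      # libvirt network name, from the log above
	    MAC=52:54:00:97:ce:99              # VM MAC address, from the log above
	
	    until sudo virsh net-dhcp-leases "$NET" | grep -qi "$MAC"; do
	        echo "waiting for machine to come up..."
	        sleep 2
	    done
	    sudo virsh net-dhcp-leases "$NET" | grep -i "$MAC"   # prints the assigned IP (192.168.39.246 here)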
	I0819 18:11:55.195060   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | Getting to WaitForSSH function...
	I0819 18:11:55.197338   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.197725   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.197752   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.197958   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | Using SSH client type: external
	I0819 18:11:55.197986   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa (-rw-------)
	I0819 18:11:55.198019   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:11:55.198042   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | About to run SSH command:
	I0819 18:11:55.198054   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | exit 0
	I0819 18:11:55.324964   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | SSH cmd err, output: <nil>: 
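	The WaitForSSH step above shells out to the system ssh client and simply runs `exit 0` until it succeeds. A sketch of the same probe using the exact options from the log (IP and key path are specific to this run):
	    VM_IP=192.168.39.246
	    KEY=/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa
	
	    # Retry `exit 0` over SSH until sshd in the guest accepts the connection.
	    until ssh -F /dev/null \
	        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o IdentitiesOnly=yes -i "$KEY" -p 22 docker@"$VM_IP" exit 0; do
	        sleep 2
	    done
	    echo "SSH is available on $VM_IP"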
	I0819 18:11:55.325282   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetConfigRaw
	I0819 18:11:55.325941   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:11:55.328992   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.329339   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.329370   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.329729   63216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/config.json ...
	I0819 18:11:55.329976   63216 machine.go:93] provisionDockerMachine start ...
	I0819 18:11:55.329995   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:11:55.330219   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:55.332767   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.333107   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.333129   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.333256   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:55.333429   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:55.333565   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:55.333713   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:55.333882   63216 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:55.334054   63216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:11:55.334065   63216 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:11:55.441006   63216 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 18:11:55.441036   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetMachineName
	I0819 18:11:55.441269   63216 buildroot.go:166] provisioning hostname "old-k8s-version-079123"
	I0819 18:11:55.441295   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetMachineName
	I0819 18:11:55.441478   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:55.444311   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.444620   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.444670   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.444837   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:55.444998   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:55.445157   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:55.445254   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:55.445399   63216 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:55.445561   63216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:11:55.445573   63216 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-079123 && echo "old-k8s-version-079123" | sudo tee /etc/hostname
	I0819 18:11:55.570220   63216 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-079123
	
	I0819 18:11:55.570248   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:55.573000   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.573287   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.573320   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.573488   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:55.573657   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:55.573843   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:55.573964   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:55.574127   63216 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:55.574349   63216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:11:55.574375   63216 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-079123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-079123/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-079123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:11:55.689117   63216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
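	Hostname provisioning above amounts to two idempotent steps on the guest: set the hostname, then make sure /etc/hosts resolves it. A commented sketch of the same commands (hostname taken from this run):
	    NEW_HOSTNAME=old-k8s-version-079123
	
	    # Set the running hostname and persist it across reboots.
	    sudo hostname "$NEW_HOSTNAME" && echo "$NEW_HOSTNAME" | sudo tee /etc/hostname
	
	    # Ensure /etc/hosts has an entry for the new name, reusing 127.0.1.1 when present.
	    if ! grep -q "[[:space:]]$NEW_HOSTNAME\$" /etc/hosts; then
	        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	            sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NEW_HOSTNAME/" /etc/hosts
	        else
	            echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts
	        fi
	    fi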
	I0819 18:11:55.689149   63216 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:11:55.689188   63216 buildroot.go:174] setting up certificates
	I0819 18:11:55.689197   63216 provision.go:84] configureAuth start
	I0819 18:11:55.689206   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetMachineName
	I0819 18:11:55.689473   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:11:55.692008   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.692351   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.692378   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.692562   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:55.694554   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.694857   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.694887   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.695014   63216 provision.go:143] copyHostCerts
	I0819 18:11:55.695081   63216 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:11:55.695106   63216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:11:55.695173   63216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:11:55.695286   63216 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:11:55.695299   63216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:11:55.695356   63216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:11:55.695443   63216 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:11:55.695454   63216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:11:55.695486   63216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:11:55.695562   63216 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-079123 san=[127.0.0.1 192.168.39.246 localhost minikube old-k8s-version-079123]
	I0819 18:11:55.869810   63216 provision.go:177] copyRemoteCerts
	I0819 18:11:55.869867   63216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:11:55.869891   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:55.872450   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.872786   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:55.872830   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:55.873074   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:55.873253   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:55.873446   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:55.873571   63216 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:11:55.953958   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:11:55.977165   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 18:11:56.000080   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:11:56.024070   63216 provision.go:87] duration metric: took 334.861961ms to configureAuth
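	To double-check the TLS material that configureAuth just generated and copied, the server certificate on the guest can be inspected directly. A sketch assuming openssl is present in the guest image, using the paths and SAN list shown above:
	    KEY=/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa
	
	    # The SAN list should contain 127.0.0.1, 192.168.39.246, localhost, minikube
	    # and old-k8s-version-079123, matching the `san=[...]` line in the log.
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i "$KEY" docker@192.168.39.246 \
	        "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"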
	I0819 18:11:56.024096   63216 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:11:56.024264   63216 config.go:182] Loaded profile config "old-k8s-version-079123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 18:11:56.024329   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:56.026749   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.027163   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:56.027189   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.027386   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:56.027631   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:56.027777   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:56.027914   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:56.028061   63216 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:56.028239   63216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:11:56.028259   63216 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:11:56.286038   63216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:11:56.286064   63216 machine.go:96] duration metric: took 956.075427ms to provisionDockerMachine
	I0819 18:11:56.286075   63216 start.go:293] postStartSetup for "old-k8s-version-079123" (driver="kvm2")
	I0819 18:11:56.286084   63216 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:11:56.286110   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:11:56.286463   63216 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:11:56.286490   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:56.289581   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.289981   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:56.290011   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.290201   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:56.290381   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:56.290533   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:56.290653   63216 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:11:56.375215   63216 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:11:56.379285   63216 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:11:56.379308   63216 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:11:56.379387   63216 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:11:56.379491   63216 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:11:56.379614   63216 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:11:56.388498   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:11:56.410516   63216 start.go:296] duration metric: took 124.429856ms for postStartSetup
	I0819 18:11:56.410554   63216 fix.go:56] duration metric: took 19.70893892s for fixHost
	I0819 18:11:56.410573   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:56.413470   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.413815   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:56.413851   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.414020   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:56.414239   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:56.414400   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:56.414605   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:56.414807   63216 main.go:141] libmachine: Using SSH client type: native
	I0819 18:11:56.414985   63216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0819 18:11:56.414996   63216 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:11:56.521365   63216 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091116.499116840
	
	I0819 18:11:56.521391   63216 fix.go:216] guest clock: 1724091116.499116840
	I0819 18:11:56.521401   63216 fix.go:229] Guest: 2024-08-19 18:11:56.49911684 +0000 UTC Remote: 2024-08-19 18:11:56.410557576 +0000 UTC m=+211.688353726 (delta=88.559264ms)
	I0819 18:11:56.521421   63216 fix.go:200] guest clock delta is within tolerance: 88.559264ms
	I0819 18:11:56.521426   63216 start.go:83] releasing machines lock for "old-k8s-version-079123", held for 19.819850252s
	I0819 18:11:56.521447   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:11:56.521723   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:11:56.524882   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.525397   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:56.525435   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.525525   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:11:56.526071   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:11:56.526252   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .DriverName
	I0819 18:11:56.526339   63216 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:11:56.526387   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:56.526518   63216 ssh_runner.go:195] Run: cat /version.json
	I0819 18:11:56.526555   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHHostname
	I0819 18:11:56.529189   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.529504   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.529572   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:56.529608   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.529872   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:56.529928   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:56.529956   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:56.530050   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHPort
	I0819 18:11:56.530139   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:56.530205   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHKeyPath
	I0819 18:11:56.530279   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:56.530409   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetSSHUsername
	I0819 18:11:56.530416   63216 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:11:56.530520   63216 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/old-k8s-version-079123/id_rsa Username:docker}
	I0819 18:11:56.610689   63216 ssh_runner.go:195] Run: systemctl --version
	I0819 18:11:56.649551   63216 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:11:56.790535   63216 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:11:56.796638   63216 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:11:56.796705   63216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:11:56.811414   63216 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:11:56.811437   63216 start.go:495] detecting cgroup driver to use...
	I0819 18:11:56.811499   63216 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:11:56.826139   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:11:56.845901   63216 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:11:56.845974   63216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:11:56.861016   63216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:11:56.875103   63216 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:11:56.996344   63216 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:11:57.181991   63216 docker.go:233] disabling docker service ...
	I0819 18:11:57.182080   63216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:11:57.203243   63216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:11:57.222353   63216 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:11:57.359888   63216 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:11:57.498326   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
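	The runtime switch above is a plain systemctl sequence: take cri-dockerd and docker out of the picture so CRI-O owns the CRI socket. A sketch of the same steps to run inside the guest (unit names as they appear in the log):
	    # Stop, disable and mask cri-dockerd so it cannot claim the CRI socket.
	    sudo systemctl stop cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	
	    # Do the same for the docker engine; CRI-O remains the only runtime.
	    sudo systemctl stop docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	
	    # Non-zero exit here means docker is no longer active.
	    sudo systemctl is-active --quiet docker.service || echo "docker is inactive"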
	I0819 18:11:57.511737   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:11:57.532368   63216 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0819 18:11:57.532432   63216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:11:57.545604   63216 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:11:57.545670   63216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:11:57.556273   63216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:11:57.567065   63216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
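	The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the v1.20-era pause image and the cgroupfs driver. The same edits, consolidated and commented (file path and values taken from this run):
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	
	    # Use the pause image that matches Kubernetes v1.20.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	
	    # Drive cgroups via cgroupfs and run conmon in the pod cgroup.
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"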
	I0819 18:11:57.577585   63216 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:11:57.588492   63216 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:11:57.598055   63216 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:11:57.598117   63216 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:11:57.611639   63216 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
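	The sysctl failure above is expected before br_netfilter is loaded; once the module is in, bridged traffic is visible to iptables and IPv4 forwarding is switched on. The same prerequisites as a short sketch:
	    # Load the bridge netfilter module, after which the sysctl becomes readable.
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables
	
	    # Enable IPv4 forwarding for pod and service traffic.
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward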
	I0819 18:11:57.621291   63216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:11:57.766685   63216 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:11:57.915166   63216 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:11:57.915257   63216 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:11:57.919668   63216 start.go:563] Will wait 60s for crictl version
	I0819 18:11:57.919735   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:11:57.923336   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:11:57.972032   63216 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
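	That version check works because crictl was pointed at CRI-O's socket via /etc/crictl.yaml a few lines earlier. A sketch of the configuration plus the verification:
	    # Tell crictl where the CRI-O socket lives, then confirm connectivity.
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    sudo crictl version    # expect RuntimeName: cri-o, RuntimeVersion: 1.29.1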
	I0819 18:11:57.972131   63216 ssh_runner.go:195] Run: crio --version
	I0819 18:11:58.005157   63216 ssh_runner.go:195] Run: crio --version
	I0819 18:11:58.034059   63216 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0819 18:11:58.035407   63216 main.go:141] libmachine: (old-k8s-version-079123) Calling .GetIP
	I0819 18:11:58.038717   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:58.039133   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:ce:99", ip: ""} in network mk-old-k8s-version-079123: {Iface:virbr1 ExpiryTime:2024-08-19 19:11:47 +0000 UTC Type:0 Mac:52:54:00:97:ce:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:old-k8s-version-079123 Clientid:01:52:54:00:97:ce:99}
	I0819 18:11:58.039164   63216 main.go:141] libmachine: (old-k8s-version-079123) DBG | domain old-k8s-version-079123 has defined IP address 192.168.39.246 and MAC address 52:54:00:97:ce:99 in network mk-old-k8s-version-079123
	I0819 18:11:58.039415   63216 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:11:58.043680   63216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
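	The one-liner above makes host.minikube.internal resolve to the host side of the minikube network. A commented version of the same rewrite (gateway address from this run):
	    GATEWAY=192.168.39.1   # host-side address on the 192.168.39.0/24 network
	
	    # Strip any stale host.minikube.internal entry, append the current one,
	    # then swap the file in place via a temp copy, as the log does.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      printf '%s\thost.minikube.internal\n' "$GATEWAY"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$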
	I0819 18:11:58.056191   63216 kubeadm.go:883] updating cluster {Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:11:58.056307   63216 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 18:11:58.056352   63216 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:11:58.099298   63216 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 18:11:58.099359   63216 ssh_runner.go:195] Run: which lz4
	I0819 18:11:58.103169   63216 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:11:58.107369   63216 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:11:58.107396   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0819 18:11:59.588032   63216 crio.go:462] duration metric: took 1.484889199s to copy over tarball
	I0819 18:11:59.588106   63216 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:12:02.488101   63216 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.899961886s)
	I0819 18:12:02.488140   63216 crio.go:469] duration metric: took 2.900075464s to extract the tarball
	I0819 18:12:02.488149   63216 ssh_runner.go:146] rm: /preloaded.tar.lz4
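
The preload step above stats /preloaded.tar.lz4 on the node, copies the cached tarball over when it is missing, unpacks it into /var with an lz4-aware tar, and then deletes the tarball. A rough local sketch of the extract-and-clean-up portion follows, assuming the tarball is already on disk; paths and the extractPreload name are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the tar invocation in the log: preserve xattrs
// (including security.capability) and let tar pipe through lz4 via -I.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	// The log removes the tarball afterwards to free VM disk space;
	// outside the VM this may need elevated privileges.
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
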
	I0819 18:12:02.530785   63216 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:12:02.567396   63216 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0819 18:12:02.567417   63216 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 18:12:02.567486   63216 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:12:02.567517   63216 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:12:02.567537   63216 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0819 18:12:02.567544   63216 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:12:02.567572   63216 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0819 18:12:02.567617   63216 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:12:02.567692   63216 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:12:02.567525   63216 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:12:02.569594   63216 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0819 18:12:02.569617   63216 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0819 18:12:02.569615   63216 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:12:02.569610   63216 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:12:02.569595   63216 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:12:02.569708   63216 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:12:02.569831   63216 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:12:02.570012   63216 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:12:02.819910   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:12:02.833042   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0819 18:12:02.873237   63216 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0819 18:12:02.873300   63216 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:12:02.873371   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:12:02.888986   63216 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0819 18:12:02.889009   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:12:02.889029   63216 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0819 18:12:02.889077   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:12:02.890392   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:12:02.903689   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:12:02.906406   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:12:02.947833   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0819 18:12:02.950489   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:12:02.950549   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:12:02.952535   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0819 18:12:03.004346   63216 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0819 18:12:03.004410   63216 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:12:03.004463   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:12:03.013818   63216 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0819 18:12:03.013861   63216 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:12:03.013902   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:12:03.037258   63216 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0819 18:12:03.037306   63216 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:12:03.037358   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:12:03.088444   63216 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0819 18:12:03.088497   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:12:03.088448   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0819 18:12:03.088510   63216 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0819 18:12:03.088500   63216 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0819 18:12:03.088539   63216 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0819 18:12:03.088545   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:12:03.088571   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:12:03.088578   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:12:03.088541   63216 ssh_runner.go:195] Run: which crictl
	I0819 18:12:03.088630   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:12:03.208113   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0819 18:12:03.208127   63216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0819 18:12:03.208138   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:12:03.208225   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:12:03.208245   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:12:03.208356   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:12:03.208391   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:12:03.339176   63216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0819 18:12:03.339185   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0819 18:12:03.339281   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0819 18:12:03.339351   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:12:03.339437   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:12:03.339466   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0819 18:12:03.439562   63216 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:12:03.443187   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0819 18:12:03.443616   63216 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0819 18:12:03.514271   63216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0819 18:12:03.514271   63216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0819 18:12:03.514271   63216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0819 18:12:03.623825   63216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0819 18:12:03.623832   63216 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0819 18:12:03.623908   63216 cache_images.go:92] duration metric: took 1.056474843s to LoadCachedImages
	W0819 18:12:03.623984   63216 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19478-10654/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
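
The block above checks each required image against the container runtime (podman image inspect), marks images whose digests do not match as needing transfer, removes the stale tags with crictl, and then tries to load replacements from the local image cache; here the cache files are missing, so minikube falls back to pulling during kubeadm init. A simplified Go sketch of that decision loop, with hypothetical helper names and an illustrative cache layout:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// needsTransfer returns true when the runtime does not already hold the image.
func needsTransfer(image string) bool {
	// podman exits non-zero when the image is absent, which is all we need here.
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() != nil
}

func loadCachedImages(images []string, cacheDir string) {
	for _, img := range images {
		if !needsTransfer(img) {
			continue
		}
		// Drop any stale tag so a cached copy could be loaded cleanly.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
		cached := cacheDir + "/" + img // illustrative layout, not minikube's exact path scheme
		if _, err := os.Stat(cached); err != nil {
			fmt.Printf("X Unable to load cached image %s: %v\n", img, err)
			continue // fall back to pulling the image later
		}
		// A real loader would now stream `cached` into the runtime (e.g. podman load).
	}
}

func main() {
	loadCachedImages([]string{"registry.k8s.io/pause:3.2"}, "/home/jenkins/.minikube/cache/images/amd64")
}
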
	I0819 18:12:03.623999   63216 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.20.0 crio true true} ...
	I0819 18:12:03.624142   63216 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-079123 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
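
The kubelet drop-in printed above is rendered from the node settings (runtime, Kubernetes version, hostname override, node IP) before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal text/template sketch that produces an equivalent unit; the struct field names are illustrative, the flag values are taken from the log.

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Render with the values shown in the log for this node.
	_ = tmpl.Execute(os.Stdout, struct {
		Runtime, KubernetesVersion, NodeName, NodeIP string
	}{"crio", "v1.20.0", "old-k8s-version-079123", "192.168.39.246"})
}
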
	I0819 18:12:03.624250   63216 ssh_runner.go:195] Run: crio config
	I0819 18:12:03.681707   63216 cni.go:84] Creating CNI manager for ""
	I0819 18:12:03.681733   63216 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:12:03.681750   63216 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:12:03.681775   63216 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-079123 NodeName:old-k8s-version-079123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 18:12:03.681943   63216 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-079123"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:12:03.682017   63216 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 18:12:03.691970   63216 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:12:03.692025   63216 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:12:03.701011   63216 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0819 18:12:03.716696   63216 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:12:03.732002   63216 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0819 18:12:03.749538   63216 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I0819 18:12:03.753053   63216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:12:03.765512   63216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:12:03.899998   63216 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:12:03.920029   63216 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123 for IP: 192.168.39.246
	I0819 18:12:03.920052   63216 certs.go:194] generating shared ca certs ...
	I0819 18:12:03.920067   63216 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:12:03.920230   63216 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:12:03.920282   63216 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:12:03.920292   63216 certs.go:256] generating profile certs ...
	I0819 18:12:03.920389   63216 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.key
	I0819 18:12:03.920462   63216 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key.9240b1b2
	I0819 18:12:03.920500   63216 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.key
	I0819 18:12:03.920633   63216 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:12:03.920681   63216 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:12:03.920694   63216 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:12:03.920724   63216 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:12:03.920790   63216 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:12:03.920831   63216 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:12:03.920904   63216 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:12:03.921550   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:12:03.968764   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:12:03.997595   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:12:04.029368   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:12:04.056323   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 18:12:04.086470   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:12:04.118602   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:12:04.147885   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:12:04.189478   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:12:04.215740   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:12:04.242681   63216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:12:04.266397   63216 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:12:04.283673   63216 ssh_runner.go:195] Run: openssl version
	I0819 18:12:04.289733   63216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:12:04.300370   63216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:12:04.304606   63216 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:12:04.304686   63216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:12:04.310513   63216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 18:12:04.324442   63216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:12:04.338713   63216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:12:04.343958   63216 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:12:04.344012   63216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:12:04.350296   63216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:12:04.361561   63216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:12:04.371515   63216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:12:04.375575   63216 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:12:04.375647   63216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:12:04.380966   63216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:12:04.390613   63216 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:12:04.394639   63216 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:12:04.400321   63216 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:12:04.406048   63216 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:12:04.411723   63216 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:12:04.417082   63216 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:12:04.422441   63216 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
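
Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check can be expressed in Go with crypto/x509; the sketch below assumes a PEM-encoded certificate file and is not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// inside the given window -- the Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
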
	I0819 18:12:04.427677   63216 kubeadm.go:392] StartCluster: {Name:old-k8s-version-079123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-079123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:12:04.427776   63216 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:12:04.427851   63216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:12:04.468806   63216 cri.go:89] found id: ""
	I0819 18:12:04.468877   63216 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:12:04.478676   63216 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 18:12:04.478704   63216 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 18:12:04.478762   63216 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 18:12:04.488220   63216 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:12:04.488994   63216 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-079123" does not appear in /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:12:04.489367   63216 kubeconfig.go:62] /home/jenkins/minikube-integration/19478-10654/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-079123" cluster setting kubeconfig missing "old-k8s-version-079123" context setting]
	I0819 18:12:04.490116   63216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:12:04.563760   63216 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 18:12:04.574391   63216 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.246
	I0819 18:12:04.574429   63216 kubeadm.go:1160] stopping kube-system containers ...
	I0819 18:12:04.574443   63216 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 18:12:04.574498   63216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:12:04.614509   63216 cri.go:89] found id: ""
	I0819 18:12:04.614583   63216 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 18:12:04.638147   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:12:04.648569   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:12:04.648590   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:12:04.648641   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:12:04.660054   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:12:04.660117   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:12:04.672118   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:12:04.683426   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:12:04.683477   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:12:04.692441   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:12:04.701114   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:12:04.701180   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:12:04.710230   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:12:04.719353   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:12:04.719405   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
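
The four grep/rm pairs above implement one small rule: a leftover /etc/kubernetes/*.conf is only kept if it already points at https://control-plane.minikube.internal:8443; anything else (or, as here, a missing file) is removed so kubeadm can regenerate it. A compact sketch of that loop, with the file list passed in by the caller and an illustrative function name:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleConfigs removes every kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // already points at the right endpoint, keep it
		}
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "remove %s: %v\n", f, err)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
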
	I0819 18:12:04.728644   63216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:12:04.738004   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:12:04.931561   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:12:05.907658   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:12:06.129900   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:12:06.225350   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
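
Instead of a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered /var/tmp/minikube/kubeadm.yaml, with PATH pointing at the cached v1.20.0 binaries. A sketch of that sequence; the phase list and paths come from the log, while runInitPhases is an illustrative helper, not minikube's API.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases replays `kubeadm init phase <name>` for each phase, the way the
// restart path does, so existing state on the node is reused where possible.
func runInitPhases(binDir, config string, phases []string) error {
	for _, phase := range phases {
		script := fmt.Sprintf(
			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, phase, config)
		cmd := exec.Command("/bin/bash", "-c", script)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("phase %q: %w", phase, err)
		}
	}
	return nil
}

func main() {
	_ = runInitPhases("/var/lib/minikube/binaries/v1.20.0", "/var/tmp/minikube/kubeadm.yaml",
		[]string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"})
}
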
	I0819 18:12:06.321882   63216 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:12:06.321974   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:06.822443   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:07.322180   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:07.822873   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:08.323005   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:08.822861   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:09.322900   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:09.822882   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:10.322309   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:10.822248   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:11.322115   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:11.823038   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:12.322978   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:12.822398   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:13.322088   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:13.822091   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:14.322760   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:14.822917   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:15.322871   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:15.822119   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:16.322925   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:16.822347   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:17.322090   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:17.822068   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:18.322819   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:18.822552   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:19.322803   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:19.822479   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:20.322389   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:20.822191   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:21.322255   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:21.822782   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:22.322144   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:22.822618   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:23.322807   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:23.822395   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:24.322349   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:24.823081   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:25.322679   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:25.822097   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:26.322924   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:26.823073   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:27.322820   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:27.822767   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:28.322830   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:28.822931   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:29.322742   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:29.822980   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:30.322422   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:30.823076   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:31.322837   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:31.822331   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:32.322159   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:32.822752   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:33.322246   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:33.822211   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:34.322207   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:34.822367   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:35.323005   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:35.822280   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:36.322648   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:36.822672   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:37.322787   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:37.822168   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:38.322502   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:38.822965   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:39.322110   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:39.822929   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:40.322232   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:40.822340   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:41.322122   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:41.822084   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:42.322619   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:42.823066   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:43.322123   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:43.822944   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:44.322263   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:44.822851   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:45.322039   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:45.822068   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:46.322019   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:46.822647   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:47.322386   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:47.822253   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:48.322249   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:48.822260   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:49.322795   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:49.822607   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:50.322655   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:50.822892   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:51.322370   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:51.822723   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:52.322126   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:52.822713   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:53.322076   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:53.822933   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:54.322339   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:54.822562   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:55.322782   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:55.823000   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:56.322994   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:56.823039   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:57.323004   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:57.822887   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:58.322419   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:58.822661   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:59.322845   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:12:59.822049   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:00.322286   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:00.822289   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:01.322473   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:01.822858   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:02.322397   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:02.822752   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:03.322704   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:03.822096   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:04.322244   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:04.822332   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:05.322335   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:05.823050   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
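
The long run of pgrep calls above is a poll loop: every 500ms minikube checks whether a kube-apiserver process whose command line mentions "minikube" exists, and in this run roughly a minute of polling passes without a match before it switches to collecting diagnostics. A minimal version of that wait, with the timeout as a parameter and waitForAPIServerProcess as an illustrative name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver process
// shows up or the deadline passes, mirroring the 500ms cadence in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(time.Minute); err != nil {
		fmt.Println(err)
	}
}
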
	I0819 18:13:06.322451   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:06.322529   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:06.359087   63216 cri.go:89] found id: ""
	I0819 18:13:06.359121   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.359131   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:06.359138   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:06.359204   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:06.392578   63216 cri.go:89] found id: ""
	I0819 18:13:06.392608   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.392620   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:06.392626   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:06.392681   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:06.426135   63216 cri.go:89] found id: ""
	I0819 18:13:06.426161   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.426171   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:06.426178   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:06.426248   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:06.460826   63216 cri.go:89] found id: ""
	I0819 18:13:06.460857   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.460868   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:06.460876   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:06.460943   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:06.494645   63216 cri.go:89] found id: ""
	I0819 18:13:06.494676   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.494686   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:06.494694   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:06.494754   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:06.526566   63216 cri.go:89] found id: ""
	I0819 18:13:06.526604   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.526616   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:06.526624   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:06.526682   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:06.560918   63216 cri.go:89] found id: ""
	I0819 18:13:06.560944   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.560954   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:06.560962   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:06.561022   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:06.600324   63216 cri.go:89] found id: ""
	I0819 18:13:06.600351   63216 logs.go:276] 0 containers: []
	W0819 18:13:06.600361   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:06.600372   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:06.600387   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:06.675977   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:06.676008   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:06.689229   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:06.689257   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:06.818329   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:06.818350   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:06.818365   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:06.898684   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:06.898729   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
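
With no control-plane containers found, the run above falls back to gathering diagnostics: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes` via the cached binary, and a `crictl ps -a` listing. A small sketch that collects the same outputs into a map, using the commands exactly as they appear in the log; gatherLogs is an illustrative name, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs each diagnostic command and keeps whatever output it produces,
// even when the command fails (a refused API server still yields useful stderr).
func gatherLogs() map[string]string {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	out := make(map[string]string, len(cmds))
	for name, c := range cmds {
		b, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range gatherLogs() {
		fmt.Printf("==> %s (%d bytes)\n", name, len(logs))
	}
}
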
	I0819 18:13:09.437742   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:09.451716   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:09.451777   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:09.488020   63216 cri.go:89] found id: ""
	I0819 18:13:09.488050   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.488062   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:09.488070   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:09.488127   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:09.520499   63216 cri.go:89] found id: ""
	I0819 18:13:09.520525   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.520535   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:09.520542   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:09.520610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:09.552953   63216 cri.go:89] found id: ""
	I0819 18:13:09.552982   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.552994   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:09.553001   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:09.553049   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:09.586723   63216 cri.go:89] found id: ""
	I0819 18:13:09.586748   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.586758   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:09.586765   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:09.586828   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:09.625706   63216 cri.go:89] found id: ""
	I0819 18:13:09.625730   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.625738   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:09.625744   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:09.625795   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:09.665468   63216 cri.go:89] found id: ""
	I0819 18:13:09.665513   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.665524   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:09.665532   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:09.665599   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:09.700302   63216 cri.go:89] found id: ""
	I0819 18:13:09.700327   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.700338   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:09.700346   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:09.700408   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:09.736668   63216 cri.go:89] found id: ""
	I0819 18:13:09.736696   63216 logs.go:276] 0 containers: []
	W0819 18:13:09.736707   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:09.736717   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:09.736731   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:09.811569   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:09.811604   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:09.850115   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:09.850156   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:09.907896   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:09.907957   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:09.942066   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:09.942092   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:10.067408   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:12.568094   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:12.581264   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:12.581349   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:12.614077   63216 cri.go:89] found id: ""
	I0819 18:13:12.614110   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.614122   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:12.614130   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:12.614192   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:12.650988   63216 cri.go:89] found id: ""
	I0819 18:13:12.651016   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.651026   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:12.651033   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:12.651094   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:12.684400   63216 cri.go:89] found id: ""
	I0819 18:13:12.684432   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.684451   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:12.684461   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:12.684529   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:12.725877   63216 cri.go:89] found id: ""
	I0819 18:13:12.725911   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.725923   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:12.725930   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:12.725983   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:12.756884   63216 cri.go:89] found id: ""
	I0819 18:13:12.756910   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.756920   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:12.756927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:12.756990   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:12.792868   63216 cri.go:89] found id: ""
	I0819 18:13:12.792891   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.792901   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:12.792909   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:12.792969   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:12.829470   63216 cri.go:89] found id: ""
	I0819 18:13:12.829493   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.829503   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:12.829509   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:12.829559   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:12.865987   63216 cri.go:89] found id: ""
	I0819 18:13:12.866016   63216 logs.go:276] 0 containers: []
	W0819 18:13:12.866026   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:12.866044   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:12.866059   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:12.919509   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:12.919549   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:12.932561   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:12.932595   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:13.006557   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:13.006578   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:13.006592   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:13.098746   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:13.098782   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:15.640259   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:15.655740   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:15.655799   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:15.694078   63216 cri.go:89] found id: ""
	I0819 18:13:15.694108   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.694119   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:15.694135   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:15.694198   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:15.729202   63216 cri.go:89] found id: ""
	I0819 18:13:15.729224   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.729231   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:15.729237   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:15.729289   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:15.767282   63216 cri.go:89] found id: ""
	I0819 18:13:15.767313   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.767324   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:15.767331   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:15.767392   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:15.812821   63216 cri.go:89] found id: ""
	I0819 18:13:15.812846   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.812854   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:15.812862   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:15.812917   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:15.845143   63216 cri.go:89] found id: ""
	I0819 18:13:15.845167   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.845174   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:15.845181   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:15.845229   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:15.877880   63216 cri.go:89] found id: ""
	I0819 18:13:15.877907   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.877917   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:15.877923   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:15.877973   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:15.910644   63216 cri.go:89] found id: ""
	I0819 18:13:15.910669   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.910677   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:15.910683   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:15.910729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:15.943233   63216 cri.go:89] found id: ""
	I0819 18:13:15.943258   63216 logs.go:276] 0 containers: []
	W0819 18:13:15.943266   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:15.943279   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:15.943290   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:16.019409   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:16.019431   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:16.019443   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:16.098851   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:16.098888   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:16.133631   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:16.133665   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:16.184712   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:16.184744   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
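The cycle above repeats every few seconds until the apiserver wait times out: minikube looks for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and gathers kubelet, dmesg, CRI-O and container-status logs before retrying, while "describe nodes" keeps failing because nothing answers on localhost:8443. For reproducing the same diagnosis by hand on the node, a minimal bash sketch built from the commands in the log (only the retry loop around them is illustrative, it is not minikube's own code):

    #!/usr/bin/env bash
    # Poll until a kube-apiserver process for the minikube profile appears.
    # Every command below is taken from the log above; only the loop is added.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                  kube-controller-manager kindnet kubernetes-dashboard; do
        # Empty output means CRI-O never created a container for this component.
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching \"$name\""
      done
      # The same log sources minikube gathers on each failed iteration.
      sudo journalctl -u kubelet -n 400 | tail -n 20
      sudo journalctl -u crio -n 400 | tail -n 20
      sleep 3
    done
    echo "kube-apiserver process found"
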
	I0819 18:13:18.698204   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:18.711115   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:18.711179   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:18.742736   63216 cri.go:89] found id: ""
	I0819 18:13:18.742761   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.742773   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:18.742779   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:18.742841   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:18.774725   63216 cri.go:89] found id: ""
	I0819 18:13:18.774756   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.774766   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:18.774774   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:18.774836   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:18.808681   63216 cri.go:89] found id: ""
	I0819 18:13:18.808710   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.808719   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:18.808725   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:18.808796   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:18.841605   63216 cri.go:89] found id: ""
	I0819 18:13:18.841650   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.841661   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:18.841669   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:18.841729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:18.882155   63216 cri.go:89] found id: ""
	I0819 18:13:18.882186   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.882197   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:18.882205   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:18.882267   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:18.916193   63216 cri.go:89] found id: ""
	I0819 18:13:18.916221   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.916229   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:18.916235   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:18.916282   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:18.951203   63216 cri.go:89] found id: ""
	I0819 18:13:18.951234   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.951246   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:18.951254   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:18.951314   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:18.985851   63216 cri.go:89] found id: ""
	I0819 18:13:18.985883   63216 logs.go:276] 0 containers: []
	W0819 18:13:18.985894   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:18.985906   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:18.985919   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:19.059954   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:19.059992   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:19.098414   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:19.098446   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:19.150127   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:19.150164   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:19.163023   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:19.163052   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:19.234490   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:21.735689   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:21.757353   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:21.757422   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:21.790799   63216 cri.go:89] found id: ""
	I0819 18:13:21.790827   63216 logs.go:276] 0 containers: []
	W0819 18:13:21.790841   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:21.790849   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:21.790913   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:21.827001   63216 cri.go:89] found id: ""
	I0819 18:13:21.827040   63216 logs.go:276] 0 containers: []
	W0819 18:13:21.827048   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:21.827054   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:21.827102   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:21.860386   63216 cri.go:89] found id: ""
	I0819 18:13:21.860413   63216 logs.go:276] 0 containers: []
	W0819 18:13:21.860422   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:21.860429   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:21.860475   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:21.898112   63216 cri.go:89] found id: ""
	I0819 18:13:21.898137   63216 logs.go:276] 0 containers: []
	W0819 18:13:21.898144   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:21.898150   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:21.898197   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:21.931118   63216 cri.go:89] found id: ""
	I0819 18:13:21.931142   63216 logs.go:276] 0 containers: []
	W0819 18:13:21.931149   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:21.931154   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:21.931200   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:21.964960   63216 cri.go:89] found id: ""
	I0819 18:13:21.964992   63216 logs.go:276] 0 containers: []
	W0819 18:13:21.965003   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:21.965012   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:21.965082   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:22.000608   63216 cri.go:89] found id: ""
	I0819 18:13:22.000632   63216 logs.go:276] 0 containers: []
	W0819 18:13:22.000640   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:22.000645   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:22.000692   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:22.032492   63216 cri.go:89] found id: ""
	I0819 18:13:22.032517   63216 logs.go:276] 0 containers: []
	W0819 18:13:22.032524   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:22.032533   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:22.032544   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:22.068762   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:22.068793   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:22.122263   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:22.122299   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:22.135217   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:22.135276   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:22.204445   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:22.204468   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:22.204483   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:24.780878   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:24.793612   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:24.793683   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:24.824183   63216 cri.go:89] found id: ""
	I0819 18:13:24.824212   63216 logs.go:276] 0 containers: []
	W0819 18:13:24.824220   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:24.824226   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:24.824293   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:24.855515   63216 cri.go:89] found id: ""
	I0819 18:13:24.855545   63216 logs.go:276] 0 containers: []
	W0819 18:13:24.855556   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:24.855563   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:24.855624   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:24.891006   63216 cri.go:89] found id: ""
	I0819 18:13:24.891032   63216 logs.go:276] 0 containers: []
	W0819 18:13:24.891038   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:24.891043   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:24.891104   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:24.925708   63216 cri.go:89] found id: ""
	I0819 18:13:24.925737   63216 logs.go:276] 0 containers: []
	W0819 18:13:24.925744   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:24.925750   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:24.925796   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:24.958791   63216 cri.go:89] found id: ""
	I0819 18:13:24.958818   63216 logs.go:276] 0 containers: []
	W0819 18:13:24.958824   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:24.958831   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:24.958874   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:24.993681   63216 cri.go:89] found id: ""
	I0819 18:13:24.993706   63216 logs.go:276] 0 containers: []
	W0819 18:13:24.993716   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:24.993724   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:24.993780   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:25.030532   63216 cri.go:89] found id: ""
	I0819 18:13:25.030563   63216 logs.go:276] 0 containers: []
	W0819 18:13:25.030573   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:25.030582   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:25.030674   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:25.065032   63216 cri.go:89] found id: ""
	I0819 18:13:25.065057   63216 logs.go:276] 0 containers: []
	W0819 18:13:25.065067   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:25.065077   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:25.065091   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:25.117371   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:25.117408   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:25.130159   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:25.130188   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:25.200536   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:25.200573   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:25.200593   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:25.276653   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:25.276704   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:27.820874   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:27.833475   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:27.833542   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:27.869514   63216 cri.go:89] found id: ""
	I0819 18:13:27.869550   63216 logs.go:276] 0 containers: []
	W0819 18:13:27.869562   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:27.869570   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:27.869632   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:27.901097   63216 cri.go:89] found id: ""
	I0819 18:13:27.901120   63216 logs.go:276] 0 containers: []
	W0819 18:13:27.901127   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:27.901132   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:27.901176   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:27.935762   63216 cri.go:89] found id: ""
	I0819 18:13:27.935788   63216 logs.go:276] 0 containers: []
	W0819 18:13:27.935795   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:27.935800   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:27.935858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:27.971766   63216 cri.go:89] found id: ""
	I0819 18:13:27.971790   63216 logs.go:276] 0 containers: []
	W0819 18:13:27.971798   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:27.971803   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:27.971858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:28.006241   63216 cri.go:89] found id: ""
	I0819 18:13:28.006265   63216 logs.go:276] 0 containers: []
	W0819 18:13:28.006273   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:28.006278   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:28.006325   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:28.037675   63216 cri.go:89] found id: ""
	I0819 18:13:28.037702   63216 logs.go:276] 0 containers: []
	W0819 18:13:28.037710   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:28.037716   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:28.037769   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:28.070448   63216 cri.go:89] found id: ""
	I0819 18:13:28.070471   63216 logs.go:276] 0 containers: []
	W0819 18:13:28.070479   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:28.070485   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:28.070533   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:28.103712   63216 cri.go:89] found id: ""
	I0819 18:13:28.103736   63216 logs.go:276] 0 containers: []
	W0819 18:13:28.103756   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:28.103768   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:28.103798   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:28.186888   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:28.186924   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:28.223200   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:28.223226   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:28.272643   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:28.272679   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:28.285915   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:28.285947   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:28.365502   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
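The repeated "connection to the server localhost:8443 was refused" errors mean the node-local kubectl cannot reach an apiserver at all; with no kube-apiserver container ever created, nothing is listening on that port. A quick way to confirm this from the node, using standard tools rather than minikube's own checks:

    # Nothing should be listening on the apiserver port while the errors persist.
    sudo ss -tlnp | grep ':8443' || echo "no listener on 8443"
    # Reproduce the same refusal kubectl reports, directly against the port.
    curl -sk https://localhost:8443/healthz || echo "connection refused"
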
	I0819 18:13:30.865923   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:30.878733   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:30.878794   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:30.914510   63216 cri.go:89] found id: ""
	I0819 18:13:30.914535   63216 logs.go:276] 0 containers: []
	W0819 18:13:30.914543   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:30.914549   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:30.914602   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:30.950847   63216 cri.go:89] found id: ""
	I0819 18:13:30.950873   63216 logs.go:276] 0 containers: []
	W0819 18:13:30.950882   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:30.950888   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:30.950934   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:30.982452   63216 cri.go:89] found id: ""
	I0819 18:13:30.982485   63216 logs.go:276] 0 containers: []
	W0819 18:13:30.982492   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:30.982498   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:30.982543   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:31.016038   63216 cri.go:89] found id: ""
	I0819 18:13:31.016071   63216 logs.go:276] 0 containers: []
	W0819 18:13:31.016082   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:31.016089   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:31.016153   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:31.047942   63216 cri.go:89] found id: ""
	I0819 18:13:31.047973   63216 logs.go:276] 0 containers: []
	W0819 18:13:31.047984   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:31.047991   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:31.048053   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:31.084145   63216 cri.go:89] found id: ""
	I0819 18:13:31.084178   63216 logs.go:276] 0 containers: []
	W0819 18:13:31.084185   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:31.084191   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:31.084251   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:31.119738   63216 cri.go:89] found id: ""
	I0819 18:13:31.119767   63216 logs.go:276] 0 containers: []
	W0819 18:13:31.119775   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:31.119781   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:31.119828   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:31.151738   63216 cri.go:89] found id: ""
	I0819 18:13:31.151771   63216 logs.go:276] 0 containers: []
	W0819 18:13:31.151783   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:31.151796   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:31.151811   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:31.204685   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:31.204726   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:31.218169   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:31.218197   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:31.280419   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:31.280446   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:31.280461   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:31.355214   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:31.355251   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:33.892194   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:33.904201   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:33.904279   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:33.935656   63216 cri.go:89] found id: ""
	I0819 18:13:33.935684   63216 logs.go:276] 0 containers: []
	W0819 18:13:33.935693   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:33.935698   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:33.935758   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:33.968266   63216 cri.go:89] found id: ""
	I0819 18:13:33.968300   63216 logs.go:276] 0 containers: []
	W0819 18:13:33.968329   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:33.968339   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:33.968406   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:33.999562   63216 cri.go:89] found id: ""
	I0819 18:13:33.999595   63216 logs.go:276] 0 containers: []
	W0819 18:13:33.999606   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:33.999613   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:33.999676   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:34.033819   63216 cri.go:89] found id: ""
	I0819 18:13:34.033848   63216 logs.go:276] 0 containers: []
	W0819 18:13:34.033859   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:34.033866   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:34.033927   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:34.066439   63216 cri.go:89] found id: ""
	I0819 18:13:34.066471   63216 logs.go:276] 0 containers: []
	W0819 18:13:34.066482   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:34.066491   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:34.066557   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:34.102118   63216 cri.go:89] found id: ""
	I0819 18:13:34.102149   63216 logs.go:276] 0 containers: []
	W0819 18:13:34.102156   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:34.102162   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:34.102219   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:34.137949   63216 cri.go:89] found id: ""
	I0819 18:13:34.137976   63216 logs.go:276] 0 containers: []
	W0819 18:13:34.137987   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:34.137994   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:34.138056   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:34.174701   63216 cri.go:89] found id: ""
	I0819 18:13:34.174723   63216 logs.go:276] 0 containers: []
	W0819 18:13:34.174736   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:34.174747   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:34.174762   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:34.246325   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:34.246343   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:34.246356   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:34.323673   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:34.323711   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:34.363447   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:34.363481   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:34.412917   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:34.412955   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:36.926419   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:36.940147   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:36.940231   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:36.974813   63216 cri.go:89] found id: ""
	I0819 18:13:36.974841   63216 logs.go:276] 0 containers: []
	W0819 18:13:36.974854   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:36.974862   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:36.974931   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:37.008714   63216 cri.go:89] found id: ""
	I0819 18:13:37.008740   63216 logs.go:276] 0 containers: []
	W0819 18:13:37.008769   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:37.008778   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:37.008829   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:37.042161   63216 cri.go:89] found id: ""
	I0819 18:13:37.042187   63216 logs.go:276] 0 containers: []
	W0819 18:13:37.042194   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:37.042200   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:37.042251   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:37.076461   63216 cri.go:89] found id: ""
	I0819 18:13:37.076491   63216 logs.go:276] 0 containers: []
	W0819 18:13:37.076500   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:37.076506   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:37.076553   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:37.108470   63216 cri.go:89] found id: ""
	I0819 18:13:37.108506   63216 logs.go:276] 0 containers: []
	W0819 18:13:37.108517   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:37.108524   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:37.108593   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:37.142694   63216 cri.go:89] found id: ""
	I0819 18:13:37.142727   63216 logs.go:276] 0 containers: []
	W0819 18:13:37.142737   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:37.142745   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:37.142808   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:37.175756   63216 cri.go:89] found id: ""
	I0819 18:13:37.175786   63216 logs.go:276] 0 containers: []
	W0819 18:13:37.175795   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:37.175802   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:37.175860   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:37.211900   63216 cri.go:89] found id: ""
	I0819 18:13:37.211927   63216 logs.go:276] 0 containers: []
	W0819 18:13:37.211936   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:37.211948   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:37.211963   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:37.263603   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:37.263644   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:37.277059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:37.277094   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:37.345743   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:37.345764   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:37.345775   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:37.423896   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:37.423930   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:39.961962   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:39.975066   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:39.975217   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:40.007974   63216 cri.go:89] found id: ""
	I0819 18:13:40.007999   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.008010   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:40.008018   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:40.008085   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:40.039329   63216 cri.go:89] found id: ""
	I0819 18:13:40.039372   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.039382   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:40.039390   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:40.039447   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:40.073113   63216 cri.go:89] found id: ""
	I0819 18:13:40.073149   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.073160   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:40.073168   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:40.073225   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:40.104895   63216 cri.go:89] found id: ""
	I0819 18:13:40.104936   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.104949   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:40.104958   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:40.105026   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:40.146067   63216 cri.go:89] found id: ""
	I0819 18:13:40.146097   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.146107   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:40.146115   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:40.146178   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:40.180782   63216 cri.go:89] found id: ""
	I0819 18:13:40.180816   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.180826   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:40.180834   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:40.180895   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:40.211465   63216 cri.go:89] found id: ""
	I0819 18:13:40.211489   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.211497   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:40.211503   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:40.211562   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:40.243428   63216 cri.go:89] found id: ""
	I0819 18:13:40.243455   63216 logs.go:276] 0 containers: []
	W0819 18:13:40.243463   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:40.243471   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:40.243484   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:40.293540   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:40.293572   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:40.306610   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:40.306640   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:40.379364   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:40.379387   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:40.379403   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:40.459186   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:40.459227   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:42.997539   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:43.010250   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:43.010311   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:43.042188   63216 cri.go:89] found id: ""
	I0819 18:13:43.042220   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.042229   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:43.042237   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:43.042302   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:43.074815   63216 cri.go:89] found id: ""
	I0819 18:13:43.074842   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.074851   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:43.074860   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:43.074917   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:43.107583   63216 cri.go:89] found id: ""
	I0819 18:13:43.107616   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.107623   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:43.107631   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:43.107690   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:43.139506   63216 cri.go:89] found id: ""
	I0819 18:13:43.139532   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.139539   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:43.139545   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:43.139612   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:43.173263   63216 cri.go:89] found id: ""
	I0819 18:13:43.173288   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.173295   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:43.173300   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:43.173359   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:43.205383   63216 cri.go:89] found id: ""
	I0819 18:13:43.205406   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.205413   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:43.205419   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:43.205474   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:43.235731   63216 cri.go:89] found id: ""
	I0819 18:13:43.235757   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.235764   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:43.235771   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:43.235826   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:43.269519   63216 cri.go:89] found id: ""
	I0819 18:13:43.269548   63216 logs.go:276] 0 containers: []
	W0819 18:13:43.269560   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:43.269571   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:43.269591   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:43.321121   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:43.321158   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:43.333944   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:43.333972   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:43.400407   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:43.400429   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:43.400441   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:43.481399   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:43.481444   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:46.021419   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:46.034954   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:46.035020   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:46.068155   63216 cri.go:89] found id: ""
	I0819 18:13:46.068183   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.068196   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:46.068204   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:46.068264   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:46.101245   63216 cri.go:89] found id: ""
	I0819 18:13:46.101278   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.101289   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:46.101296   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:46.101380   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:46.132982   63216 cri.go:89] found id: ""
	I0819 18:13:46.133008   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.133015   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:46.133021   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:46.133079   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:46.165891   63216 cri.go:89] found id: ""
	I0819 18:13:46.165920   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.165927   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:46.165935   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:46.166000   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:46.196728   63216 cri.go:89] found id: ""
	I0819 18:13:46.196770   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.196780   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:46.196787   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:46.196898   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:46.230208   63216 cri.go:89] found id: ""
	I0819 18:13:46.230238   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.230257   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:46.230265   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:46.230359   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:46.262233   63216 cri.go:89] found id: ""
	I0819 18:13:46.262259   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.262269   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:46.262275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:46.262335   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:46.294934   63216 cri.go:89] found id: ""
	I0819 18:13:46.294966   63216 logs.go:276] 0 containers: []
	W0819 18:13:46.294978   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:46.294989   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:46.295005   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:46.359914   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:46.359939   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:46.359950   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:46.442154   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:46.442190   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:46.487062   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:46.487090   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:46.535507   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:46.535539   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:49.049056   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:49.061413   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:49.061467   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:49.093059   63216 cri.go:89] found id: ""
	I0819 18:13:49.093081   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.093089   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:49.093094   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:49.093148   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:49.125742   63216 cri.go:89] found id: ""
	I0819 18:13:49.125769   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.125778   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:49.125784   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:49.125843   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:49.160971   63216 cri.go:89] found id: ""
	I0819 18:13:49.160998   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.161008   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:49.161015   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:49.161078   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:49.193823   63216 cri.go:89] found id: ""
	I0819 18:13:49.193850   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.193860   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:49.193867   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:49.193927   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:49.227036   63216 cri.go:89] found id: ""
	I0819 18:13:49.227071   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.227085   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:49.227093   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:49.227161   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:49.257812   63216 cri.go:89] found id: ""
	I0819 18:13:49.257845   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.257856   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:49.257866   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:49.257928   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:49.290698   63216 cri.go:89] found id: ""
	I0819 18:13:49.290723   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.290730   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:49.290736   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:49.290782   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:49.319943   63216 cri.go:89] found id: ""
	I0819 18:13:49.319977   63216 logs.go:276] 0 containers: []
	W0819 18:13:49.319988   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:49.319998   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:49.320008   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:49.369843   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:49.369876   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:49.383124   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:49.383151   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:49.448863   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:49.448883   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:49.448898   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:49.533191   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:49.533225   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:52.072715   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:52.085149   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:52.085211   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:52.118141   63216 cri.go:89] found id: ""
	I0819 18:13:52.118171   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.118182   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:52.118189   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:52.118238   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:52.150141   63216 cri.go:89] found id: ""
	I0819 18:13:52.150173   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.150184   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:52.150192   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:52.150252   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:52.182528   63216 cri.go:89] found id: ""
	I0819 18:13:52.182555   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.182563   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:52.182578   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:52.182630   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:52.214414   63216 cri.go:89] found id: ""
	I0819 18:13:52.214438   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.214446   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:52.214452   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:52.214505   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:52.245566   63216 cri.go:89] found id: ""
	I0819 18:13:52.245593   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.245601   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:52.245607   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:52.245662   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:52.278233   63216 cri.go:89] found id: ""
	I0819 18:13:52.278263   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.278275   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:52.278280   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:52.278342   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:52.309981   63216 cri.go:89] found id: ""
	I0819 18:13:52.310006   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.310013   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:52.310018   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:52.310067   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:52.345384   63216 cri.go:89] found id: ""
	I0819 18:13:52.345415   63216 logs.go:276] 0 containers: []
	W0819 18:13:52.345424   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:52.345435   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:52.345451   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:52.382588   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:52.382626   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:52.430190   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:52.430225   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:52.445241   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:52.445267   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:52.515996   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:52.516018   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:52.516032   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:55.095707   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:55.108862   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:55.108941   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:55.144485   63216 cri.go:89] found id: ""
	I0819 18:13:55.144514   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.144524   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:55.144532   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:55.144598   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:55.184409   63216 cri.go:89] found id: ""
	I0819 18:13:55.184436   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.184445   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:55.184452   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:55.184513   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:55.218829   63216 cri.go:89] found id: ""
	I0819 18:13:55.218857   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.218867   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:55.218875   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:55.218935   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:55.254672   63216 cri.go:89] found id: ""
	I0819 18:13:55.254699   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.254708   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:55.254714   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:55.254776   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:55.288573   63216 cri.go:89] found id: ""
	I0819 18:13:55.288603   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.288613   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:55.288621   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:55.288679   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:55.323362   63216 cri.go:89] found id: ""
	I0819 18:13:55.323387   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.323394   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:55.323400   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:55.323449   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:55.356800   63216 cri.go:89] found id: ""
	I0819 18:13:55.356826   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.356835   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:55.356843   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:55.356901   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:55.391144   63216 cri.go:89] found id: ""
	I0819 18:13:55.391175   63216 logs.go:276] 0 containers: []
	W0819 18:13:55.391184   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:55.391193   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:55.391208   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:55.465832   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:55.465868   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:13:55.521688   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:55.521726   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:55.571651   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:55.571686   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:55.586948   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:55.586973   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:55.649622   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
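	(Editor's sketch, not part of the captured log: the lines above repeat the same ~3-second cycle while minikube waits for a kube-apiserver process. The minimal Go program below reconstructs that cycle for readability; the command strings are copied verbatim from the log, while the wrapper itself, the fixed 3 s sleep, and the printed summary are assumptions for illustration only, not minikube source.)

	// Sketch of the retry loop recorded above: wait for an apiserver process,
	// and on every pass query CRI-O for each control-plane container and
	// gather diagnostics. Commands are taken from the log; the wrapper is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for {
			// The loop would exit once an apiserver process appears; in the log above it never does.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				break
			}
			for _, name := range components {
				// Each query returns no IDs, hence the repeated "0 containers" /
				// "No container was found matching ..." lines.
				out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
				fmt.Printf("%s: %d bytes of container IDs\n", name, len(out))
			}
			// Diagnostics gathered on every pass: kubelet and CRI-O journals, dmesg,
			// "kubectl describe nodes" (fails with "connection ... refused" while the
			// apiserver is down), and a container status listing.
			for _, cmd := range []string{
				"sudo journalctl -u kubelet -n 400",
				"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
				"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
				"sudo journalctl -u crio -n 400",
				"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
			} {
				exec.Command("/bin/bash", "-c", cmd).Run()
			}
			time.Sleep(3 * time.Second)
		}
	}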
	I0819 18:13:58.150298   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:13:58.163184   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:13:58.163267   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:13:58.195698   63216 cri.go:89] found id: ""
	I0819 18:13:58.195722   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.195729   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:13:58.195736   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:13:58.195794   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:13:58.231547   63216 cri.go:89] found id: ""
	I0819 18:13:58.231584   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.231598   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:13:58.231605   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:13:58.231667   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:13:58.264431   63216 cri.go:89] found id: ""
	I0819 18:13:58.264455   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.264463   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:13:58.264468   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:13:58.264523   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:13:58.296684   63216 cri.go:89] found id: ""
	I0819 18:13:58.296713   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.296722   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:13:58.296735   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:13:58.296820   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:13:58.330841   63216 cri.go:89] found id: ""
	I0819 18:13:58.330872   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.330880   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:13:58.330886   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:13:58.330935   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:13:58.363050   63216 cri.go:89] found id: ""
	I0819 18:13:58.363079   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.363089   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:13:58.363098   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:13:58.363151   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:13:58.398610   63216 cri.go:89] found id: ""
	I0819 18:13:58.398640   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.398651   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:13:58.398659   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:13:58.398727   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:13:58.435325   63216 cri.go:89] found id: ""
	I0819 18:13:58.435354   63216 logs.go:276] 0 containers: []
	W0819 18:13:58.435362   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:13:58.435371   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:13:58.435383   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:13:58.486134   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:13:58.486173   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:13:58.498976   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:13:58.499000   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:13:58.569168   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:13:58.569194   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:13:58.569211   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:13:58.654118   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:13:58.654157   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:01.198807   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:01.211439   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:01.211517   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:01.245217   63216 cri.go:89] found id: ""
	I0819 18:14:01.245241   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.245249   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:01.245255   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:01.245318   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:01.294512   63216 cri.go:89] found id: ""
	I0819 18:14:01.294540   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.294548   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:01.294553   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:01.294620   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:01.326300   63216 cri.go:89] found id: ""
	I0819 18:14:01.326328   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.326339   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:01.326347   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:01.326450   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:01.359702   63216 cri.go:89] found id: ""
	I0819 18:14:01.359728   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.359736   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:01.359742   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:01.359801   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:01.393617   63216 cri.go:89] found id: ""
	I0819 18:14:01.393651   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.393664   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:01.393672   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:01.393737   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:01.428407   63216 cri.go:89] found id: ""
	I0819 18:14:01.428442   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.428453   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:01.428461   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:01.428531   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:01.460782   63216 cri.go:89] found id: ""
	I0819 18:14:01.460811   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.460820   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:01.460826   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:01.460882   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:01.493704   63216 cri.go:89] found id: ""
	I0819 18:14:01.493731   63216 logs.go:276] 0 containers: []
	W0819 18:14:01.493740   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:01.493749   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:01.493761   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:01.561444   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:01.561469   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:01.561483   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:01.640856   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:01.640893   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:01.678634   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:01.678658   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:01.731160   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:01.731191   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:04.246024   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:04.259075   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:04.259149   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:04.293232   63216 cri.go:89] found id: ""
	I0819 18:14:04.293258   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.293266   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:04.293272   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:04.293331   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:04.327768   63216 cri.go:89] found id: ""
	I0819 18:14:04.327797   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.327805   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:04.327812   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:04.327873   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:04.360027   63216 cri.go:89] found id: ""
	I0819 18:14:04.360050   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.360058   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:04.360063   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:04.360119   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:04.393776   63216 cri.go:89] found id: ""
	I0819 18:14:04.393801   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.393808   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:04.393815   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:04.393865   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:04.426422   63216 cri.go:89] found id: ""
	I0819 18:14:04.426447   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.426454   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:04.426459   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:04.426510   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:04.458457   63216 cri.go:89] found id: ""
	I0819 18:14:04.458484   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.458491   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:04.458497   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:04.458551   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:04.493426   63216 cri.go:89] found id: ""
	I0819 18:14:04.493450   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.493458   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:04.493465   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:04.493526   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:04.525387   63216 cri.go:89] found id: ""
	I0819 18:14:04.525419   63216 logs.go:276] 0 containers: []
	W0819 18:14:04.525429   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:04.525440   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:04.525453   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:04.576590   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:04.576626   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:04.591521   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:04.591551   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:04.663686   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:04.663709   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:04.663724   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:04.742427   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:04.742461   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:07.283292   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:07.295479   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:07.295535   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:07.330573   63216 cri.go:89] found id: ""
	I0819 18:14:07.330609   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.330621   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:07.330629   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:07.330679   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:07.364113   63216 cri.go:89] found id: ""
	I0819 18:14:07.364147   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.364159   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:07.364166   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:07.364217   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:07.400050   63216 cri.go:89] found id: ""
	I0819 18:14:07.400079   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.400087   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:07.400093   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:07.400140   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:07.433948   63216 cri.go:89] found id: ""
	I0819 18:14:07.433981   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.433988   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:07.433994   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:07.434046   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:07.467216   63216 cri.go:89] found id: ""
	I0819 18:14:07.467249   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.467266   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:07.467275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:07.467330   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:07.499323   63216 cri.go:89] found id: ""
	I0819 18:14:07.499349   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.499356   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:07.499362   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:07.499421   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:07.531191   63216 cri.go:89] found id: ""
	I0819 18:14:07.531221   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.531229   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:07.531235   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:07.531300   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:07.565112   63216 cri.go:89] found id: ""
	I0819 18:14:07.565141   63216 logs.go:276] 0 containers: []
	W0819 18:14:07.565152   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:07.565164   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:07.565178   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:07.629577   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:07.629602   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:07.629617   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:07.729283   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:07.729354   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:07.776864   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:07.776904   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:07.828391   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:07.828427   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:10.341418   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:10.354374   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:10.354455   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:10.387446   63216 cri.go:89] found id: ""
	I0819 18:14:10.387476   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.387485   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:10.387494   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:10.387554   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:10.422338   63216 cri.go:89] found id: ""
	I0819 18:14:10.422370   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.422386   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:10.422394   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:10.422450   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:10.455571   63216 cri.go:89] found id: ""
	I0819 18:14:10.455602   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.455610   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:10.455616   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:10.455680   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:10.493915   63216 cri.go:89] found id: ""
	I0819 18:14:10.493946   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.493954   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:10.493960   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:10.494015   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:10.527761   63216 cri.go:89] found id: ""
	I0819 18:14:10.527786   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.527794   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:10.527799   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:10.527855   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:10.561279   63216 cri.go:89] found id: ""
	I0819 18:14:10.561304   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.561312   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:10.561318   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:10.561370   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:10.597561   63216 cri.go:89] found id: ""
	I0819 18:14:10.597592   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.597600   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:10.597605   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:10.597652   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:10.629534   63216 cri.go:89] found id: ""
	I0819 18:14:10.629570   63216 logs.go:276] 0 containers: []
	W0819 18:14:10.629582   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:10.629594   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:10.629609   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:10.679131   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:10.679159   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:10.692017   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:10.692046   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:10.758752   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:10.758778   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:10.758794   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:10.836833   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:10.836868   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:13.379965   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:13.395858   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:13.395915   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:13.434271   63216 cri.go:89] found id: ""
	I0819 18:14:13.434302   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.434310   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:13.434316   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:13.434366   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:13.483127   63216 cri.go:89] found id: ""
	I0819 18:14:13.483156   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.483164   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:13.483172   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:13.483235   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:13.523529   63216 cri.go:89] found id: ""
	I0819 18:14:13.523557   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.523564   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:13.523570   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:13.523624   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:13.555828   63216 cri.go:89] found id: ""
	I0819 18:14:13.555847   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.555855   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:13.555861   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:13.555907   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:13.591591   63216 cri.go:89] found id: ""
	I0819 18:14:13.591613   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.591620   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:13.591626   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:13.591674   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:13.625625   63216 cri.go:89] found id: ""
	I0819 18:14:13.625658   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.625668   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:13.625678   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:13.625743   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:13.659664   63216 cri.go:89] found id: ""
	I0819 18:14:13.659687   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.659695   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:13.659701   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:13.659750   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:13.693204   63216 cri.go:89] found id: ""
	I0819 18:14:13.693238   63216 logs.go:276] 0 containers: []
	W0819 18:14:13.693249   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:13.693261   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:13.693274   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:13.741964   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:13.742000   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:13.755059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:13.755087   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:13.828253   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:13.828270   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:13.828283   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:13.905742   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:13.905774   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:16.445295   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:16.457705   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:16.457784   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:16.490557   63216 cri.go:89] found id: ""
	I0819 18:14:16.490586   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.490594   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:16.490600   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:16.490665   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:16.523015   63216 cri.go:89] found id: ""
	I0819 18:14:16.523047   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.523055   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:16.523062   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:16.523122   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:16.557966   63216 cri.go:89] found id: ""
	I0819 18:14:16.557990   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.558000   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:16.558007   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:16.558069   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:16.590653   63216 cri.go:89] found id: ""
	I0819 18:14:16.590678   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.590685   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:16.590691   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:16.590755   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:16.623454   63216 cri.go:89] found id: ""
	I0819 18:14:16.623484   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.623492   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:16.623499   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:16.623563   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:16.656189   63216 cri.go:89] found id: ""
	I0819 18:14:16.656215   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.656223   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:16.656229   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:16.656275   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:16.688931   63216 cri.go:89] found id: ""
	I0819 18:14:16.688966   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.688978   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:16.688985   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:16.689050   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:16.720838   63216 cri.go:89] found id: ""
	I0819 18:14:16.720869   63216 logs.go:276] 0 containers: []
	W0819 18:14:16.720880   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:16.720891   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:16.720910   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:16.787816   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:16.787839   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:16.787855   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:16.867869   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:16.867909   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:16.904418   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:16.904443   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:16.955265   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:16.955306   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:19.468794   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:19.481935   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:19.482015   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:19.514483   63216 cri.go:89] found id: ""
	I0819 18:14:19.514509   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.514517   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:19.514523   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:19.514575   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:19.547241   63216 cri.go:89] found id: ""
	I0819 18:14:19.547270   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.547281   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:19.547289   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:19.547349   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:19.580612   63216 cri.go:89] found id: ""
	I0819 18:14:19.580644   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.580654   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:19.580662   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:19.580734   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:19.612611   63216 cri.go:89] found id: ""
	I0819 18:14:19.612640   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.612651   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:19.612659   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:19.612728   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:19.644682   63216 cri.go:89] found id: ""
	I0819 18:14:19.644707   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.644717   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:19.644723   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:19.644804   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:19.678042   63216 cri.go:89] found id: ""
	I0819 18:14:19.678071   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.678080   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:19.678088   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:19.678155   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:19.710963   63216 cri.go:89] found id: ""
	I0819 18:14:19.710988   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.710995   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:19.711001   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:19.711058   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:19.744899   63216 cri.go:89] found id: ""
	I0819 18:14:19.744931   63216 logs.go:276] 0 containers: []
	W0819 18:14:19.744942   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:19.744954   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:19.744970   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:19.795850   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:19.795885   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:19.808797   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:19.808825   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:19.875119   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:19.875138   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:19.875150   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:19.951296   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:19.951333   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:22.488829   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:22.502110   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:22.502172   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:22.540054   63216 cri.go:89] found id: ""
	I0819 18:14:22.540083   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.540093   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:22.540100   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:22.540161   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:22.571797   63216 cri.go:89] found id: ""
	I0819 18:14:22.571822   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.571833   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:22.571841   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:22.571902   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:22.605526   63216 cri.go:89] found id: ""
	I0819 18:14:22.605553   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.605573   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:22.605583   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:22.605639   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:22.639498   63216 cri.go:89] found id: ""
	I0819 18:14:22.639523   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.639531   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:22.639537   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:22.639597   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:22.673041   63216 cri.go:89] found id: ""
	I0819 18:14:22.673067   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.673076   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:22.673083   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:22.673139   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:22.706572   63216 cri.go:89] found id: ""
	I0819 18:14:22.706615   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.706625   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:22.706637   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:22.706688   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:22.737822   63216 cri.go:89] found id: ""
	I0819 18:14:22.737848   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.737857   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:22.737862   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:22.737917   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:22.770506   63216 cri.go:89] found id: ""
	I0819 18:14:22.770536   63216 logs.go:276] 0 containers: []
	W0819 18:14:22.770543   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:22.770551   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:22.770562   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:22.847413   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:22.847451   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:22.886844   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:22.886869   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:22.939166   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:22.939204   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:22.954142   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:22.954167   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:23.028001   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:25.528805   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:25.541344   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:25.541433   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:25.579181   63216 cri.go:89] found id: ""
	I0819 18:14:25.579202   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.579209   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:25.579214   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:25.579301   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:25.612954   63216 cri.go:89] found id: ""
	I0819 18:14:25.612989   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.613002   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:25.613011   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:25.613072   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:25.649243   63216 cri.go:89] found id: ""
	I0819 18:14:25.649271   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.649278   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:25.649284   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:25.649332   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:25.682601   63216 cri.go:89] found id: ""
	I0819 18:14:25.682629   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.682637   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:25.682642   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:25.682694   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:25.715258   63216 cri.go:89] found id: ""
	I0819 18:14:25.715284   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.715292   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:25.715297   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:25.715346   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:25.747392   63216 cri.go:89] found id: ""
	I0819 18:14:25.747420   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.747429   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:25.747435   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:25.747487   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:25.779032   63216 cri.go:89] found id: ""
	I0819 18:14:25.779058   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.779066   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:25.779072   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:25.779118   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:25.813359   63216 cri.go:89] found id: ""
	I0819 18:14:25.813381   63216 logs.go:276] 0 containers: []
	W0819 18:14:25.813389   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:25.813396   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:25.813409   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:25.879037   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:25.879060   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:25.879072   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:25.956299   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:25.956345   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:25.992714   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:25.992741   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:26.042490   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:26.042531   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:28.556445   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:28.569412   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:28.569471   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:28.602083   63216 cri.go:89] found id: ""
	I0819 18:14:28.602112   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.602121   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:28.602127   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:28.602176   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:28.639664   63216 cri.go:89] found id: ""
	I0819 18:14:28.639695   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.639709   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:28.639718   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:28.639782   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:28.673246   63216 cri.go:89] found id: ""
	I0819 18:14:28.673275   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.673287   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:28.673294   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:28.673358   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:28.706464   63216 cri.go:89] found id: ""
	I0819 18:14:28.706494   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.706501   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:28.706506   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:28.706566   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:28.739013   63216 cri.go:89] found id: ""
	I0819 18:14:28.739039   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.739046   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:28.739052   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:28.739100   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:28.771829   63216 cri.go:89] found id: ""
	I0819 18:14:28.771872   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.771884   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:28.771891   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:28.771959   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:28.803802   63216 cri.go:89] found id: ""
	I0819 18:14:28.803826   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.803837   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:28.803844   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:28.803910   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:28.836368   63216 cri.go:89] found id: ""
	I0819 18:14:28.836392   63216 logs.go:276] 0 containers: []
	W0819 18:14:28.836400   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:28.836408   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:28.836422   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:28.885496   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:28.885529   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:28.898578   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:28.898612   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:28.961577   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:28.961610   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:28.961627   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:29.037092   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:29.037132   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:31.575424   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:31.587904   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:31.587963   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:31.621519   63216 cri.go:89] found id: ""
	I0819 18:14:31.621546   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.621554   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:31.621560   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:31.621611   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:31.658011   63216 cri.go:89] found id: ""
	I0819 18:14:31.658036   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.658043   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:31.658050   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:31.658103   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:31.690881   63216 cri.go:89] found id: ""
	I0819 18:14:31.690911   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.690920   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:31.690925   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:31.690977   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:31.721377   63216 cri.go:89] found id: ""
	I0819 18:14:31.721406   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.721414   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:31.721420   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:31.721468   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:31.755707   63216 cri.go:89] found id: ""
	I0819 18:14:31.755733   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.755741   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:31.755746   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:31.755799   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:31.789497   63216 cri.go:89] found id: ""
	I0819 18:14:31.789528   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.789538   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:31.789546   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:31.789614   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:31.822164   63216 cri.go:89] found id: ""
	I0819 18:14:31.822189   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.822196   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:31.822202   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:31.822251   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:31.851647   63216 cri.go:89] found id: ""
	I0819 18:14:31.851675   63216 logs.go:276] 0 containers: []
	W0819 18:14:31.851686   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:31.851697   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:31.851709   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:31.864885   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:31.864911   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:31.931398   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:31.931428   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:31.931459   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:32.005606   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:32.005646   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:32.040414   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:32.040441   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:34.591474   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:34.604396   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:34.604453   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:34.640212   63216 cri.go:89] found id: ""
	I0819 18:14:34.640242   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.640250   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:34.640256   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:34.640315   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:34.674093   63216 cri.go:89] found id: ""
	I0819 18:14:34.674122   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.674130   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:34.674137   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:34.674190   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:34.706500   63216 cri.go:89] found id: ""
	I0819 18:14:34.706527   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.706535   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:34.706540   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:34.706588   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:34.739843   63216 cri.go:89] found id: ""
	I0819 18:14:34.739866   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.739874   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:34.739879   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:34.739926   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:34.776450   63216 cri.go:89] found id: ""
	I0819 18:14:34.776474   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.776481   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:34.776486   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:34.776535   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:34.808032   63216 cri.go:89] found id: ""
	I0819 18:14:34.808062   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.808074   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:34.808081   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:34.808147   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:34.839386   63216 cri.go:89] found id: ""
	I0819 18:14:34.839410   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.839426   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:34.839433   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:34.839490   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:34.870659   63216 cri.go:89] found id: ""
	I0819 18:14:34.870683   63216 logs.go:276] 0 containers: []
	W0819 18:14:34.870690   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:34.870698   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:34.870709   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:34.921624   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:34.921657   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:34.934547   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:34.934573   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:34.996846   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:34.996870   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:34.996883   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:35.072163   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:35.072203   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:37.610265   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:37.622605   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:37.622658   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:37.659545   63216 cri.go:89] found id: ""
	I0819 18:14:37.659570   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.659581   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:37.659587   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:37.659633   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:37.692772   63216 cri.go:89] found id: ""
	I0819 18:14:37.692802   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.692812   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:37.692819   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:37.692877   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:37.727444   63216 cri.go:89] found id: ""
	I0819 18:14:37.727473   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.727483   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:37.727491   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:37.727550   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:37.758049   63216 cri.go:89] found id: ""
	I0819 18:14:37.758081   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.758092   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:37.758100   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:37.758166   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:37.790155   63216 cri.go:89] found id: ""
	I0819 18:14:37.790181   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.790190   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:37.790198   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:37.790260   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:37.824925   63216 cri.go:89] found id: ""
	I0819 18:14:37.824954   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.824963   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:37.824970   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:37.825034   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:37.856080   63216 cri.go:89] found id: ""
	I0819 18:14:37.856106   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.856114   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:37.856120   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:37.856165   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:37.893758   63216 cri.go:89] found id: ""
	I0819 18:14:37.893780   63216 logs.go:276] 0 containers: []
	W0819 18:14:37.893788   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:37.893796   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:37.893807   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:37.930077   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:37.930105   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:37.982189   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:37.982222   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:37.994718   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:37.994743   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:38.063156   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:38.063177   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:38.063190   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:40.645647   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:40.657745   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:40.657805   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:40.689988   63216 cri.go:89] found id: ""
	I0819 18:14:40.690018   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.690030   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:40.690038   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:40.690088   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:40.721233   63216 cri.go:89] found id: ""
	I0819 18:14:40.721263   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.721273   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:40.721281   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:40.721341   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:40.751337   63216 cri.go:89] found id: ""
	I0819 18:14:40.751360   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.751368   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:40.751373   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:40.751431   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:40.784811   63216 cri.go:89] found id: ""
	I0819 18:14:40.784839   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.784849   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:40.784857   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:40.784920   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:40.819232   63216 cri.go:89] found id: ""
	I0819 18:14:40.819268   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.819278   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:40.819288   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:40.819347   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:40.849176   63216 cri.go:89] found id: ""
	I0819 18:14:40.849203   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.849213   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:40.849221   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:40.849280   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:40.888810   63216 cri.go:89] found id: ""
	I0819 18:14:40.888834   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.888842   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:40.888848   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:40.888906   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:40.920601   63216 cri.go:89] found id: ""
	I0819 18:14:40.920632   63216 logs.go:276] 0 containers: []
	W0819 18:14:40.920644   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:40.920654   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:40.920665   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:40.973469   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:40.973505   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:40.986579   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:40.986604   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:41.055314   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:41.055339   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:41.055351   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:41.137354   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:41.137390   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:43.676251   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:43.688959   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:43.689036   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:43.724535   63216 cri.go:89] found id: ""
	I0819 18:14:43.724570   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.724582   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:43.724590   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:43.724654   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:43.755660   63216 cri.go:89] found id: ""
	I0819 18:14:43.755683   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.755691   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:43.755696   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:43.755750   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:43.788743   63216 cri.go:89] found id: ""
	I0819 18:14:43.788783   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.788792   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:43.788798   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:43.788856   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:43.822009   63216 cri.go:89] found id: ""
	I0819 18:14:43.822033   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.822040   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:43.822048   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:43.822113   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:43.853604   63216 cri.go:89] found id: ""
	I0819 18:14:43.853629   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.853638   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:43.853643   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:43.853693   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:43.887512   63216 cri.go:89] found id: ""
	I0819 18:14:43.887538   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.887546   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:43.887552   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:43.887610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:43.920363   63216 cri.go:89] found id: ""
	I0819 18:14:43.920392   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.920404   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:43.920411   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:43.920480   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:43.956430   63216 cri.go:89] found id: ""
	I0819 18:14:43.956462   63216 logs.go:276] 0 containers: []
	W0819 18:14:43.956473   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:43.956485   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:43.956500   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:44.037570   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:44.037612   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:44.076803   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:44.076840   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:44.130257   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:44.130293   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:44.145528   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:44.145569   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:44.221548   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:46.722471   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:46.735380   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:46.735472   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:46.768018   63216 cri.go:89] found id: ""
	I0819 18:14:46.768041   63216 logs.go:276] 0 containers: []
	W0819 18:14:46.768048   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:46.768057   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:46.768104   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:46.802078   63216 cri.go:89] found id: ""
	I0819 18:14:46.802104   63216 logs.go:276] 0 containers: []
	W0819 18:14:46.802111   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:46.802117   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:46.802167   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:46.834485   63216 cri.go:89] found id: ""
	I0819 18:14:46.834510   63216 logs.go:276] 0 containers: []
	W0819 18:14:46.834517   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:46.834523   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:46.834571   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:46.866363   63216 cri.go:89] found id: ""
	I0819 18:14:46.866391   63216 logs.go:276] 0 containers: []
	W0819 18:14:46.866399   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:46.866405   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:46.866465   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:46.899637   63216 cri.go:89] found id: ""
	I0819 18:14:46.899663   63216 logs.go:276] 0 containers: []
	W0819 18:14:46.899672   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:46.899678   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:46.899736   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:46.932272   63216 cri.go:89] found id: ""
	I0819 18:14:46.932303   63216 logs.go:276] 0 containers: []
	W0819 18:14:46.932315   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:46.932323   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:46.932387   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:46.964128   63216 cri.go:89] found id: ""
	I0819 18:14:46.964154   63216 logs.go:276] 0 containers: []
	W0819 18:14:46.964162   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:46.964168   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:46.964217   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:47.000965   63216 cri.go:89] found id: ""
	I0819 18:14:47.000990   63216 logs.go:276] 0 containers: []
	W0819 18:14:47.000997   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:47.001008   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:47.001020   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:47.041490   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:47.041520   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:47.089063   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:47.089101   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:47.102664   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:47.102700   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:47.171109   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:47.171129   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:47.171144   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:49.748444   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:49.762555   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:49.762620   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:49.794835   63216 cri.go:89] found id: ""
	I0819 18:14:49.794863   63216 logs.go:276] 0 containers: []
	W0819 18:14:49.794872   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:49.794878   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:49.794930   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:49.825097   63216 cri.go:89] found id: ""
	I0819 18:14:49.825122   63216 logs.go:276] 0 containers: []
	W0819 18:14:49.825130   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:49.825135   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:49.825184   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:49.856728   63216 cri.go:89] found id: ""
	I0819 18:14:49.856771   63216 logs.go:276] 0 containers: []
	W0819 18:14:49.856783   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:49.856790   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:49.856852   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:49.893530   63216 cri.go:89] found id: ""
	I0819 18:14:49.893562   63216 logs.go:276] 0 containers: []
	W0819 18:14:49.893570   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:49.893575   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:49.893636   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:49.929925   63216 cri.go:89] found id: ""
	I0819 18:14:49.929950   63216 logs.go:276] 0 containers: []
	W0819 18:14:49.929958   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:49.929964   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:49.930017   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:49.964580   63216 cri.go:89] found id: ""
	I0819 18:14:49.964610   63216 logs.go:276] 0 containers: []
	W0819 18:14:49.964621   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:49.964631   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:49.964685   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:49.998590   63216 cri.go:89] found id: ""
	I0819 18:14:49.998622   63216 logs.go:276] 0 containers: []
	W0819 18:14:49.998632   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:49.998640   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:49.998705   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:50.037182   63216 cri.go:89] found id: ""
	I0819 18:14:50.037205   63216 logs.go:276] 0 containers: []
	W0819 18:14:50.037213   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:50.037223   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:50.037237   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:50.076763   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:50.076794   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:50.130348   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:50.130384   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:50.144053   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:50.144080   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:50.216323   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:50.216347   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:50.216360   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
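Each cycle in the log above is minikube's apiserver wait loop: it checks for a running kube-apiserver process with pgrep, asks CRI-O (via crictl) for every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying a few seconds later. A minimal sketch of the same checks run by hand, assuming an SSH session on the affected node (e.g. minikube ssh -p <profile>); the exact commands and flags below are illustrative, not copied from the test harness:

	sudo pgrep -fa kube-apiserver                  # is any apiserver process running at all?
	sudo crictl ps -a --name kube-apiserver        # has CRI-O ever created the container?
	sudo journalctl -u kubelet -n 100 --no-pager   # kubelet errors while starting static pods
	sudo journalctl -u crio -n 100 --no-pager      # CRI-O's view of the same startup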
	I0819 18:14:52.792852   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:52.804735   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:52.804805   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:52.835543   63216 cri.go:89] found id: ""
	I0819 18:14:52.835576   63216 logs.go:276] 0 containers: []
	W0819 18:14:52.835589   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:52.835598   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:52.835662   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:52.868222   63216 cri.go:89] found id: ""
	I0819 18:14:52.868252   63216 logs.go:276] 0 containers: []
	W0819 18:14:52.868263   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:52.868269   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:52.868337   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:52.902086   63216 cri.go:89] found id: ""
	I0819 18:14:52.902111   63216 logs.go:276] 0 containers: []
	W0819 18:14:52.902118   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:52.902124   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:52.902173   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:52.935716   63216 cri.go:89] found id: ""
	I0819 18:14:52.935744   63216 logs.go:276] 0 containers: []
	W0819 18:14:52.935751   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:52.935761   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:52.935816   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:52.967256   63216 cri.go:89] found id: ""
	I0819 18:14:52.967287   63216 logs.go:276] 0 containers: []
	W0819 18:14:52.967296   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:52.967302   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:52.967362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:53.016292   63216 cri.go:89] found id: ""
	I0819 18:14:53.016320   63216 logs.go:276] 0 containers: []
	W0819 18:14:53.016330   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:53.016338   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:53.016419   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:53.050294   63216 cri.go:89] found id: ""
	I0819 18:14:53.050322   63216 logs.go:276] 0 containers: []
	W0819 18:14:53.050330   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:53.050337   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:53.050409   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:53.082430   63216 cri.go:89] found id: ""
	I0819 18:14:53.082458   63216 logs.go:276] 0 containers: []
	W0819 18:14:53.082465   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:53.082473   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:53.082485   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:53.131239   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:53.131273   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:53.143683   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:53.143713   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:53.216850   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:53.216876   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:53.216892   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:53.296090   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:53.296138   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:55.837642   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:55.850636   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:55.850704   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:55.883073   63216 cri.go:89] found id: ""
	I0819 18:14:55.883104   63216 logs.go:276] 0 containers: []
	W0819 18:14:55.883111   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:55.883119   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:55.883169   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:55.913481   63216 cri.go:89] found id: ""
	I0819 18:14:55.913508   63216 logs.go:276] 0 containers: []
	W0819 18:14:55.913518   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:55.913524   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:55.913581   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:55.945801   63216 cri.go:89] found id: ""
	I0819 18:14:55.945826   63216 logs.go:276] 0 containers: []
	W0819 18:14:55.945834   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:55.945841   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:55.945900   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:55.977581   63216 cri.go:89] found id: ""
	I0819 18:14:55.977609   63216 logs.go:276] 0 containers: []
	W0819 18:14:55.977617   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:55.977625   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:55.977690   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:56.013567   63216 cri.go:89] found id: ""
	I0819 18:14:56.013592   63216 logs.go:276] 0 containers: []
	W0819 18:14:56.013600   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:56.013606   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:56.013662   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:56.045823   63216 cri.go:89] found id: ""
	I0819 18:14:56.045859   63216 logs.go:276] 0 containers: []
	W0819 18:14:56.045867   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:56.045876   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:56.045942   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:56.077574   63216 cri.go:89] found id: ""
	I0819 18:14:56.077603   63216 logs.go:276] 0 containers: []
	W0819 18:14:56.077614   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:56.077622   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:56.077690   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:56.107689   63216 cri.go:89] found id: ""
	I0819 18:14:56.107721   63216 logs.go:276] 0 containers: []
	W0819 18:14:56.107731   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:56.107742   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:56.107757   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:56.158789   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:56.158821   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:56.173344   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:56.173377   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:56.235538   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:56.235560   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:56.235578   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:56.311583   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:56.311621   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:14:58.848424   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:14:58.860778   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:14:58.860838   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:14:58.894573   63216 cri.go:89] found id: ""
	I0819 18:14:58.894598   63216 logs.go:276] 0 containers: []
	W0819 18:14:58.894609   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:14:58.894618   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:14:58.894687   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:14:58.929818   63216 cri.go:89] found id: ""
	I0819 18:14:58.929849   63216 logs.go:276] 0 containers: []
	W0819 18:14:58.929860   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:14:58.929867   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:14:58.929931   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:14:58.963683   63216 cri.go:89] found id: ""
	I0819 18:14:58.963717   63216 logs.go:276] 0 containers: []
	W0819 18:14:58.963727   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:14:58.963735   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:14:58.963797   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:14:58.996537   63216 cri.go:89] found id: ""
	I0819 18:14:58.996566   63216 logs.go:276] 0 containers: []
	W0819 18:14:58.996575   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:14:58.996582   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:14:58.996652   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:14:59.029917   63216 cri.go:89] found id: ""
	I0819 18:14:59.029944   63216 logs.go:276] 0 containers: []
	W0819 18:14:59.029951   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:14:59.029956   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:14:59.030002   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:14:59.064411   63216 cri.go:89] found id: ""
	I0819 18:14:59.064438   63216 logs.go:276] 0 containers: []
	W0819 18:14:59.064446   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:14:59.064451   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:14:59.064545   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:14:59.097457   63216 cri.go:89] found id: ""
	I0819 18:14:59.097484   63216 logs.go:276] 0 containers: []
	W0819 18:14:59.097492   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:14:59.097497   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:14:59.097544   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:14:59.134635   63216 cri.go:89] found id: ""
	I0819 18:14:59.134658   63216 logs.go:276] 0 containers: []
	W0819 18:14:59.134666   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:14:59.134675   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:14:59.134688   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:14:59.188123   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:14:59.188164   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:14:59.200974   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:14:59.201005   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:14:59.270473   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:14:59.270497   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:14:59.270513   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:14:59.345811   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:14:59.345848   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:01.885851   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:01.898388   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:01.898454   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:01.930717   63216 cri.go:89] found id: ""
	I0819 18:15:01.930741   63216 logs.go:276] 0 containers: []
	W0819 18:15:01.930748   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:01.930753   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:01.930798   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:01.963462   63216 cri.go:89] found id: ""
	I0819 18:15:01.963487   63216 logs.go:276] 0 containers: []
	W0819 18:15:01.963497   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:01.963503   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:01.963564   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:01.999902   63216 cri.go:89] found id: ""
	I0819 18:15:01.999930   63216 logs.go:276] 0 containers: []
	W0819 18:15:01.999938   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:01.999944   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:01.999993   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:02.032298   63216 cri.go:89] found id: ""
	I0819 18:15:02.032336   63216 logs.go:276] 0 containers: []
	W0819 18:15:02.032345   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:02.032351   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:02.032413   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:02.067483   63216 cri.go:89] found id: ""
	I0819 18:15:02.067510   63216 logs.go:276] 0 containers: []
	W0819 18:15:02.067521   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:02.067546   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:02.067605   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:02.098700   63216 cri.go:89] found id: ""
	I0819 18:15:02.098725   63216 logs.go:276] 0 containers: []
	W0819 18:15:02.098732   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:02.098737   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:02.098794   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:02.130911   63216 cri.go:89] found id: ""
	I0819 18:15:02.130934   63216 logs.go:276] 0 containers: []
	W0819 18:15:02.130942   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:02.130950   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:02.131010   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:02.162126   63216 cri.go:89] found id: ""
	I0819 18:15:02.162152   63216 logs.go:276] 0 containers: []
	W0819 18:15:02.162160   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:02.162168   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:02.162185   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:02.215420   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:02.215456   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:02.229339   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:02.229372   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:02.302198   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:02.302222   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:02.302235   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:02.381923   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:02.381954   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:04.918144   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:04.931477   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:04.931539   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:04.966083   63216 cri.go:89] found id: ""
	I0819 18:15:04.966110   63216 logs.go:276] 0 containers: []
	W0819 18:15:04.966121   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:04.966128   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:04.966190   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:04.998311   63216 cri.go:89] found id: ""
	I0819 18:15:04.998358   63216 logs.go:276] 0 containers: []
	W0819 18:15:04.998369   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:04.998376   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:04.998442   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:05.032024   63216 cri.go:89] found id: ""
	I0819 18:15:05.032051   63216 logs.go:276] 0 containers: []
	W0819 18:15:05.032058   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:05.032063   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:05.032124   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:05.063876   63216 cri.go:89] found id: ""
	I0819 18:15:05.063905   63216 logs.go:276] 0 containers: []
	W0819 18:15:05.063914   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:05.063920   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:05.063981   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:05.097689   63216 cri.go:89] found id: ""
	I0819 18:15:05.097717   63216 logs.go:276] 0 containers: []
	W0819 18:15:05.097727   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:05.097734   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:05.097796   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:05.129945   63216 cri.go:89] found id: ""
	I0819 18:15:05.129968   63216 logs.go:276] 0 containers: []
	W0819 18:15:05.129976   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:05.129982   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:05.130035   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:05.173400   63216 cri.go:89] found id: ""
	I0819 18:15:05.173427   63216 logs.go:276] 0 containers: []
	W0819 18:15:05.173436   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:05.173444   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:05.173499   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:05.221477   63216 cri.go:89] found id: ""
	I0819 18:15:05.221504   63216 logs.go:276] 0 containers: []
	W0819 18:15:05.221514   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:05.221525   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:05.221537   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:05.302005   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:05.302036   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:05.342251   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:05.342288   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:05.395531   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:05.395564   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:05.408888   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:05.408923   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:05.474491   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
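The recurring "The connection to the server localhost:8443 was refused" from kubectl describe nodes is the same failure seen from the client side: nothing is listening on the apiserver's secure port, so every kubectl call against the node's kubeconfig fails immediately. A quick way to confirm the missing listener from the same SSH session (a sketch, assuming ss and curl are available in the guest image):

	sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"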
	I0819 18:15:07.974945   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:07.988953   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:07.989012   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:08.025484   63216 cri.go:89] found id: ""
	I0819 18:15:08.025509   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.025518   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:08.025524   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:08.025580   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:08.062619   63216 cri.go:89] found id: ""
	I0819 18:15:08.062650   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.062662   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:08.062676   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:08.062723   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:08.100969   63216 cri.go:89] found id: ""
	I0819 18:15:08.100999   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.101007   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:08.101015   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:08.101080   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:08.140115   63216 cri.go:89] found id: ""
	I0819 18:15:08.140141   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.140148   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:08.140154   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:08.140200   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:08.171579   63216 cri.go:89] found id: ""
	I0819 18:15:08.171609   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.171617   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:08.171622   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:08.171684   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:08.202964   63216 cri.go:89] found id: ""
	I0819 18:15:08.202991   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.202999   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:08.203005   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:08.203062   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:08.233823   63216 cri.go:89] found id: ""
	I0819 18:15:08.233854   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.233864   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:08.233871   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:08.233930   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:08.268513   63216 cri.go:89] found id: ""
	I0819 18:15:08.268544   63216 logs.go:276] 0 containers: []
	W0819 18:15:08.268555   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:08.268566   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:08.268580   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:08.344800   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:08.344833   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:08.386044   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:08.386071   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:08.440545   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:08.440580   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:08.453594   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:08.453623   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:08.529756   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:11.029901   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:11.041607   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:11.041682   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:11.080806   63216 cri.go:89] found id: ""
	I0819 18:15:11.080834   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.080842   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:11.080848   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:11.080897   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:11.113470   63216 cri.go:89] found id: ""
	I0819 18:15:11.113500   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.113509   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:11.113516   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:11.113577   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:11.149992   63216 cri.go:89] found id: ""
	I0819 18:15:11.150020   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.150028   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:11.150033   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:11.150082   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:11.181924   63216 cri.go:89] found id: ""
	I0819 18:15:11.181953   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.181962   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:11.181970   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:11.182031   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:11.216963   63216 cri.go:89] found id: ""
	I0819 18:15:11.216995   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.217003   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:11.217009   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:11.217066   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:11.250200   63216 cri.go:89] found id: ""
	I0819 18:15:11.250226   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.250234   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:11.250240   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:11.250295   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:11.283383   63216 cri.go:89] found id: ""
	I0819 18:15:11.283411   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.283419   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:11.283426   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:11.283475   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:11.317618   63216 cri.go:89] found id: ""
	I0819 18:15:11.317646   63216 logs.go:276] 0 containers: []
	W0819 18:15:11.317656   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:11.317668   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:11.317684   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:11.367008   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:11.367040   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:11.382114   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:11.382138   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:11.454655   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:11.454674   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:11.454688   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:11.530306   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:11.530345   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:14.071818   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:14.084252   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:14.084312   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:14.120237   63216 cri.go:89] found id: ""
	I0819 18:15:14.120264   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.120271   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:14.120277   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:14.120323   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:14.155528   63216 cri.go:89] found id: ""
	I0819 18:15:14.155557   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.155568   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:14.155575   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:14.155644   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:14.193202   63216 cri.go:89] found id: ""
	I0819 18:15:14.193225   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.193232   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:14.193237   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:14.193298   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:14.229405   63216 cri.go:89] found id: ""
	I0819 18:15:14.229431   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.229439   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:14.229444   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:14.229506   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:14.268509   63216 cri.go:89] found id: ""
	I0819 18:15:14.268538   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.268546   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:14.268551   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:14.268612   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:14.308466   63216 cri.go:89] found id: ""
	I0819 18:15:14.308497   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.308509   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:14.308517   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:14.308577   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:14.350306   63216 cri.go:89] found id: ""
	I0819 18:15:14.350338   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.350350   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:14.350358   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:14.350426   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:14.382686   63216 cri.go:89] found id: ""
	I0819 18:15:14.382708   63216 logs.go:276] 0 containers: []
	W0819 18:15:14.382716   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:14.382724   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:14.382736   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:14.465159   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:14.465196   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:14.499531   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:14.499561   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:14.549682   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:14.549716   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:14.562194   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:14.562217   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:14.630184   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:17.131132   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:17.143544   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:17.143616   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:17.182209   63216 cri.go:89] found id: ""
	I0819 18:15:17.182238   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.182248   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:17.182255   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:17.182302   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:17.216539   63216 cri.go:89] found id: ""
	I0819 18:15:17.216567   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.216577   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:17.216586   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:17.216649   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:17.249880   63216 cri.go:89] found id: ""
	I0819 18:15:17.249906   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.249921   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:17.249927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:17.249979   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:17.282609   63216 cri.go:89] found id: ""
	I0819 18:15:17.282635   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.282644   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:17.282650   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:17.282700   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:17.316592   63216 cri.go:89] found id: ""
	I0819 18:15:17.316619   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.316628   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:17.316635   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:17.316687   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:17.348770   63216 cri.go:89] found id: ""
	I0819 18:15:17.348798   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.348805   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:17.348812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:17.348870   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:17.381721   63216 cri.go:89] found id: ""
	I0819 18:15:17.381747   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.381758   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:17.381765   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:17.381826   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:17.414945   63216 cri.go:89] found id: ""
	I0819 18:15:17.414974   63216 logs.go:276] 0 containers: []
	W0819 18:15:17.414985   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:17.414995   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:17.415013   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:17.428416   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:17.428451   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:17.494513   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:17.494534   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:17.494549   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:17.577643   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:17.577687   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:17.613404   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:17.613436   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:20.167968   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:20.181166   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:20.181232   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:20.210796   63216 cri.go:89] found id: ""
	I0819 18:15:20.210824   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.210835   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:20.210843   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:20.210906   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:20.243639   63216 cri.go:89] found id: ""
	I0819 18:15:20.243673   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.243684   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:20.243692   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:20.243757   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:20.279847   63216 cri.go:89] found id: ""
	I0819 18:15:20.279879   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.279886   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:20.279893   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:20.279946   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:20.310085   63216 cri.go:89] found id: ""
	I0819 18:15:20.310114   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.310125   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:20.310132   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:20.310180   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:20.341142   63216 cri.go:89] found id: ""
	I0819 18:15:20.341174   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.341182   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:20.341188   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:20.341237   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:20.370901   63216 cri.go:89] found id: ""
	I0819 18:15:20.370931   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.370940   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:20.370951   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:20.371013   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:20.403915   63216 cri.go:89] found id: ""
	I0819 18:15:20.403983   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.403993   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:20.403999   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:20.404055   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:20.436636   63216 cri.go:89] found id: ""
	I0819 18:15:20.436668   63216 logs.go:276] 0 containers: []
	W0819 18:15:20.436679   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:20.436690   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:20.436707   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:20.449370   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:20.449399   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:20.513161   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:20.513193   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:20.513208   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:20.593691   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:20.593726   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:20.636818   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:20.636844   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:23.191724   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:23.203709   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:23.203785   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:23.238797   63216 cri.go:89] found id: ""
	I0819 18:15:23.238822   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.238831   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:23.238836   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:23.238889   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:23.270771   63216 cri.go:89] found id: ""
	I0819 18:15:23.270797   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.270805   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:23.270811   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:23.270859   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:23.304633   63216 cri.go:89] found id: ""
	I0819 18:15:23.304663   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.304672   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:23.304678   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:23.304732   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:23.337425   63216 cri.go:89] found id: ""
	I0819 18:15:23.337456   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.337466   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:23.337474   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:23.337531   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:23.367473   63216 cri.go:89] found id: ""
	I0819 18:15:23.367498   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.367506   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:23.367512   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:23.367557   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:23.400682   63216 cri.go:89] found id: ""
	I0819 18:15:23.400706   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.400714   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:23.400720   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:23.400783   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:23.432013   63216 cri.go:89] found id: ""
	I0819 18:15:23.432039   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.432046   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:23.432052   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:23.432101   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:23.464034   63216 cri.go:89] found id: ""
	I0819 18:15:23.464066   63216 logs.go:276] 0 containers: []
	W0819 18:15:23.464074   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:23.464085   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:23.464095   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:23.542613   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:23.542648   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:23.580920   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:23.580946   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:23.630030   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:23.630078   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:23.643471   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:23.643502   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:23.709946   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
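	The cycle above — probe for a kube-apiserver process, list each control-plane container by name, then fall back to gathering kubelet, dmesg, CRI-O and container-status logs — repeats because no control plane ever comes up on this node. The same checks can be reproduced manually over SSH with the commands already visible in the log. A minimal sketch, using only commands that appear above (the v1.20.0 kubectl path and kubeconfig location are taken from this log and may differ on other setups):

	    # probe for a running apiserver process (same pattern as the log)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # list control-plane containers known to the CRI runtime (empty output = none created)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl ps -a --quiet --name=etcd
	    # node-level fallback logs gathered when the apiserver is unreachable
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # the call that fails with "connection to the server localhost:8443 was refused"
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig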
	I0819 18:15:26.211058   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:26.222933   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:26.222988   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:26.258678   63216 cri.go:89] found id: ""
	I0819 18:15:26.258702   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.258709   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:26.258715   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:26.258770   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:26.290705   63216 cri.go:89] found id: ""
	I0819 18:15:26.290734   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.290743   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:26.290748   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:26.290796   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:26.323273   63216 cri.go:89] found id: ""
	I0819 18:15:26.323301   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.323309   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:26.323315   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:26.323362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:26.354851   63216 cri.go:89] found id: ""
	I0819 18:15:26.354875   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.354882   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:26.354888   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:26.354935   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:26.388327   63216 cri.go:89] found id: ""
	I0819 18:15:26.388355   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.388365   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:26.388373   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:26.388444   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:26.420163   63216 cri.go:89] found id: ""
	I0819 18:15:26.420194   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.420204   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:26.420211   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:26.420273   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:26.451290   63216 cri.go:89] found id: ""
	I0819 18:15:26.451316   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.451365   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:26.451374   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:26.451439   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:26.481757   63216 cri.go:89] found id: ""
	I0819 18:15:26.481792   63216 logs.go:276] 0 containers: []
	W0819 18:15:26.481802   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:26.481820   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:26.481837   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:26.494364   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:26.494399   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:26.566733   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:26.566756   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:26.566770   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:26.645098   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:26.645133   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:26.682639   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:26.682670   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:29.231587   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:29.245017   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:29.245084   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:29.278823   63216 cri.go:89] found id: ""
	I0819 18:15:29.278855   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.278865   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:29.278873   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:29.278930   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:29.312267   63216 cri.go:89] found id: ""
	I0819 18:15:29.312298   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.312308   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:29.312315   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:29.312379   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:29.343967   63216 cri.go:89] found id: ""
	I0819 18:15:29.343993   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.344002   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:29.344008   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:29.344055   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:29.375768   63216 cri.go:89] found id: ""
	I0819 18:15:29.375794   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.375805   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:29.375813   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:29.375875   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:29.407457   63216 cri.go:89] found id: ""
	I0819 18:15:29.407485   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.407493   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:29.407501   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:29.407549   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:29.442560   63216 cri.go:89] found id: ""
	I0819 18:15:29.442594   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.442605   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:29.442613   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:29.442675   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:29.477914   63216 cri.go:89] found id: ""
	I0819 18:15:29.477943   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.477951   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:29.477957   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:29.478007   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:29.511380   63216 cri.go:89] found id: ""
	I0819 18:15:29.511410   63216 logs.go:276] 0 containers: []
	W0819 18:15:29.511422   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:29.511431   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:29.511443   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:29.524389   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:29.524414   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:29.587192   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:29.587214   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:29.587226   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:29.663048   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:29.663091   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:29.702187   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:29.702218   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:32.251423   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:32.264978   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:32.265041   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:32.299657   63216 cri.go:89] found id: ""
	I0819 18:15:32.299684   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.299694   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:32.299700   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:32.299755   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:32.335426   63216 cri.go:89] found id: ""
	I0819 18:15:32.335449   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.335459   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:32.335465   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:32.335519   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:32.368589   63216 cri.go:89] found id: ""
	I0819 18:15:32.368618   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.368629   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:32.368636   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:32.368683   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:32.403458   63216 cri.go:89] found id: ""
	I0819 18:15:32.403486   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.403495   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:32.403500   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:32.403552   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:32.436270   63216 cri.go:89] found id: ""
	I0819 18:15:32.436295   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.436303   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:32.436309   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:32.436360   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:32.468323   63216 cri.go:89] found id: ""
	I0819 18:15:32.468347   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.468357   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:32.468364   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:32.468424   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:32.500186   63216 cri.go:89] found id: ""
	I0819 18:15:32.500209   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.500216   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:32.500222   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:32.500279   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:32.530925   63216 cri.go:89] found id: ""
	I0819 18:15:32.530948   63216 logs.go:276] 0 containers: []
	W0819 18:15:32.530956   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:32.530965   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:32.530978   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:32.582500   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:32.582534   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:32.595128   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:32.595153   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:32.662381   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:32.662408   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:32.662423   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:32.741894   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:32.741928   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:35.282691   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:35.295066   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:35.295123   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:35.328375   63216 cri.go:89] found id: ""
	I0819 18:15:35.328401   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.328409   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:35.328415   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:35.328463   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:35.359361   63216 cri.go:89] found id: ""
	I0819 18:15:35.359386   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.359393   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:35.359398   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:35.359447   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:35.392994   63216 cri.go:89] found id: ""
	I0819 18:15:35.393023   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.393033   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:35.393040   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:35.393105   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:35.427270   63216 cri.go:89] found id: ""
	I0819 18:15:35.427305   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.427316   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:35.427323   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:35.427388   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:35.463513   63216 cri.go:89] found id: ""
	I0819 18:15:35.463542   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.463550   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:35.463555   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:35.463615   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:35.497047   63216 cri.go:89] found id: ""
	I0819 18:15:35.497079   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.497097   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:35.497105   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:35.497166   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:35.529372   63216 cri.go:89] found id: ""
	I0819 18:15:35.529404   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.529412   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:35.529418   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:35.529468   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:35.561438   63216 cri.go:89] found id: ""
	I0819 18:15:35.561467   63216 logs.go:276] 0 containers: []
	W0819 18:15:35.561476   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:35.561484   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:35.561496   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:35.574117   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:35.574144   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:35.638520   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:35.638548   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:35.638565   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:35.716197   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:35.716229   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:35.752233   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:35.752261   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:38.305621   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:38.320044   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:38.320115   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:38.352820   63216 cri.go:89] found id: ""
	I0819 18:15:38.352854   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.352865   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:38.352873   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:38.352932   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:38.386785   63216 cri.go:89] found id: ""
	I0819 18:15:38.386820   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.386831   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:38.386838   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:38.386900   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:38.419859   63216 cri.go:89] found id: ""
	I0819 18:15:38.419884   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.419892   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:38.419899   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:38.419964   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:38.453454   63216 cri.go:89] found id: ""
	I0819 18:15:38.453491   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.453499   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:38.453504   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:38.453568   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:38.486173   63216 cri.go:89] found id: ""
	I0819 18:15:38.486197   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.486205   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:38.486210   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:38.486265   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:38.519058   63216 cri.go:89] found id: ""
	I0819 18:15:38.519083   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.519091   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:38.519097   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:38.519147   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:38.552548   63216 cri.go:89] found id: ""
	I0819 18:15:38.552580   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.552589   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:38.552595   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:38.552662   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:38.586147   63216 cri.go:89] found id: ""
	I0819 18:15:38.586171   63216 logs.go:276] 0 containers: []
	W0819 18:15:38.586179   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:38.586187   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:38.586199   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:38.636432   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:38.636468   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:38.649598   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:38.649628   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:38.716588   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:38.716607   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:38.716622   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:38.798528   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:38.798574   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:41.334978   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:41.347744   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:41.347811   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:41.380963   63216 cri.go:89] found id: ""
	I0819 18:15:41.380994   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.381011   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:41.381020   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:41.381078   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:41.420023   63216 cri.go:89] found id: ""
	I0819 18:15:41.420055   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.420066   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:41.420076   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:41.420141   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:41.459169   63216 cri.go:89] found id: ""
	I0819 18:15:41.459204   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.459215   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:41.459222   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:41.459283   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:41.491302   63216 cri.go:89] found id: ""
	I0819 18:15:41.491330   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.491341   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:41.491349   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:41.491415   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:41.524866   63216 cri.go:89] found id: ""
	I0819 18:15:41.524887   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.524897   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:41.524904   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:41.524964   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:41.558087   63216 cri.go:89] found id: ""
	I0819 18:15:41.558110   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.558117   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:41.558122   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:41.558173   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:41.590308   63216 cri.go:89] found id: ""
	I0819 18:15:41.590351   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.590359   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:41.590377   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:41.590427   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:41.623098   63216 cri.go:89] found id: ""
	I0819 18:15:41.623120   63216 logs.go:276] 0 containers: []
	W0819 18:15:41.623127   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:41.623135   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:41.623148   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:41.674699   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:41.674724   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:41.687757   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:41.687780   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:41.756224   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:41.756247   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:41.756257   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:41.839114   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:41.839154   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:44.377965   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:44.391094   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:44.391148   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:44.423225   63216 cri.go:89] found id: ""
	I0819 18:15:44.423254   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.423262   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:44.423267   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:44.423324   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:44.455645   63216 cri.go:89] found id: ""
	I0819 18:15:44.455674   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.455682   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:44.455687   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:44.455733   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:44.487342   63216 cri.go:89] found id: ""
	I0819 18:15:44.487369   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.487376   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:44.487384   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:44.487437   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:44.519509   63216 cri.go:89] found id: ""
	I0819 18:15:44.519539   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.519550   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:44.519558   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:44.519620   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:44.550870   63216 cri.go:89] found id: ""
	I0819 18:15:44.550897   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.550905   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:44.550911   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:44.550961   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:44.583906   63216 cri.go:89] found id: ""
	I0819 18:15:44.583935   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.583946   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:44.583954   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:44.584015   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:44.616111   63216 cri.go:89] found id: ""
	I0819 18:15:44.616142   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.616154   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:44.616162   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:44.616226   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:44.649685   63216 cri.go:89] found id: ""
	I0819 18:15:44.649710   63216 logs.go:276] 0 containers: []
	W0819 18:15:44.649723   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:44.649732   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:44.649747   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:44.687462   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:44.687490   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:44.738455   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:44.738490   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:44.764056   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:44.764082   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:44.832915   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:44.832941   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:44.832954   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:47.419226   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:47.433896   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:47.433962   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:47.467772   63216 cri.go:89] found id: ""
	I0819 18:15:47.467798   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.467807   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:47.467812   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:47.467874   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:47.503268   63216 cri.go:89] found id: ""
	I0819 18:15:47.503293   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.503302   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:47.503308   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:47.503390   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:47.535094   63216 cri.go:89] found id: ""
	I0819 18:15:47.535125   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.535133   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:47.535139   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:47.535209   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:47.569214   63216 cri.go:89] found id: ""
	I0819 18:15:47.569245   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.569258   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:47.569266   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:47.569334   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:47.602258   63216 cri.go:89] found id: ""
	I0819 18:15:47.602287   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.602296   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:47.602302   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:47.602350   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:47.635964   63216 cri.go:89] found id: ""
	I0819 18:15:47.635998   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.636011   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:47.636020   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:47.636089   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:47.668198   63216 cri.go:89] found id: ""
	I0819 18:15:47.668229   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.668240   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:47.668247   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:47.668303   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:47.702165   63216 cri.go:89] found id: ""
	I0819 18:15:47.702206   63216 logs.go:276] 0 containers: []
	W0819 18:15:47.702216   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:47.702227   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:47.702244   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:47.714314   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:47.714344   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:47.781289   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:47.781309   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:47.781326   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:47.865381   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:47.865421   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:47.902926   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:47.902966   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:50.455083   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:50.467702   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:50.467768   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:50.517276   63216 cri.go:89] found id: ""
	I0819 18:15:50.517306   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.517315   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:50.517323   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:50.517399   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:50.550878   63216 cri.go:89] found id: ""
	I0819 18:15:50.550905   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.550914   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:50.550921   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:50.550984   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:50.583515   63216 cri.go:89] found id: ""
	I0819 18:15:50.583543   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.583553   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:50.583560   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:50.583622   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:50.618265   63216 cri.go:89] found id: ""
	I0819 18:15:50.618291   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.618299   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:50.618304   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:50.618362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:50.653436   63216 cri.go:89] found id: ""
	I0819 18:15:50.653461   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.653469   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:50.653476   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:50.653534   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:50.687715   63216 cri.go:89] found id: ""
	I0819 18:15:50.687745   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.687757   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:50.687764   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:50.687885   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:50.721235   63216 cri.go:89] found id: ""
	I0819 18:15:50.721262   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.721272   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:50.721280   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:50.721328   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:50.754095   63216 cri.go:89] found id: ""
	I0819 18:15:50.754126   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.754134   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:50.754143   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:50.754156   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:50.805661   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:50.805698   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:50.819495   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:50.819536   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:50.887296   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:50.887317   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:50.887334   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:50.966224   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:50.966261   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.508007   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:53.520812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:53.520870   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:53.552790   63216 cri.go:89] found id: ""
	I0819 18:15:53.552816   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.552823   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:53.552829   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:53.552873   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:53.585937   63216 cri.go:89] found id: ""
	I0819 18:15:53.585969   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.585978   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:53.585986   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:53.586057   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:53.618890   63216 cri.go:89] found id: ""
	I0819 18:15:53.618915   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.618922   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:53.618928   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:53.618975   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:53.650045   63216 cri.go:89] found id: ""
	I0819 18:15:53.650069   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.650076   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:53.650082   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:53.650138   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:53.685069   63216 cri.go:89] found id: ""
	I0819 18:15:53.685097   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.685106   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:53.685113   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:53.685179   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:53.717742   63216 cri.go:89] found id: ""
	I0819 18:15:53.717771   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.717778   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:53.717784   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:53.717832   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:53.747768   63216 cri.go:89] found id: ""
	I0819 18:15:53.747798   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.747806   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:53.747812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:53.747858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:53.779973   63216 cri.go:89] found id: ""
	I0819 18:15:53.779999   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.780006   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:53.780016   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:53.780027   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.815619   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:53.815656   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:53.866767   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:53.866802   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:53.879693   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:53.879721   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:53.947610   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:53.947640   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:53.947659   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:56.524639   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:56.537312   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:56.537395   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:56.569913   63216 cri.go:89] found id: ""
	I0819 18:15:56.569958   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.569965   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:56.569972   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:56.570031   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:56.602119   63216 cri.go:89] found id: ""
	I0819 18:15:56.602145   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.602152   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:56.602158   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:56.602211   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:56.634864   63216 cri.go:89] found id: ""
	I0819 18:15:56.634900   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.634910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:56.634920   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:56.634982   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:56.667099   63216 cri.go:89] found id: ""
	I0819 18:15:56.667127   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.667136   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:56.667145   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:56.667194   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:56.703539   63216 cri.go:89] found id: ""
	I0819 18:15:56.703562   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.703571   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:56.703576   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:56.703637   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.734668   63216 cri.go:89] found id: ""
	I0819 18:15:56.734691   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.734698   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:56.734703   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:56.734747   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:56.768840   63216 cri.go:89] found id: ""
	I0819 18:15:56.768866   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.768874   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:56.768880   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:56.768925   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:56.800337   63216 cri.go:89] found id: ""
	I0819 18:15:56.800366   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.800375   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:56.800384   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:56.800398   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:56.866036   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:56.866060   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:56.866072   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:56.955372   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:56.955414   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:57.004450   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:57.004477   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:57.057284   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:57.057320   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
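	Between 18:15:23 and 18:15:59 this probe/gather cycle repeats roughly every three seconds with identical results: no control-plane containers exist and every kubectl call against localhost:8443 is refused, so log collection keeps falling back to journalctl and crictl output. If this excerpt is saved to a file (assumed here to be named minikube.log, a hypothetical name), the retry cadence can be checked with standard tools:

	    # count apiserver probe attempts and print their timestamps
	    grep -c 'Run: sudo pgrep -xnf kube-apiserver' minikube.log
	    grep 'Run: sudo pgrep -xnf kube-apiserver' minikube.log | awk '{print $2}'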
	I0819 18:15:59.570450   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:59.583640   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:59.583729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:59.617911   63216 cri.go:89] found id: ""
	I0819 18:15:59.617943   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.617954   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:59.617963   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:59.618014   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:59.650239   63216 cri.go:89] found id: ""
	I0819 18:15:59.650265   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.650274   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:59.650279   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:59.650329   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:59.684877   63216 cri.go:89] found id: ""
	I0819 18:15:59.684902   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.684910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:59.684916   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:59.684977   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:59.717378   63216 cri.go:89] found id: ""
	I0819 18:15:59.717402   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.717414   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:59.717428   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:59.717484   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:59.748937   63216 cri.go:89] found id: ""
	I0819 18:15:59.748968   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.748980   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:59.748989   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:59.749058   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:59.781784   63216 cri.go:89] found id: ""
	I0819 18:15:59.781819   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.781830   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:59.781837   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:59.781899   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:59.815593   63216 cri.go:89] found id: ""
	I0819 18:15:59.815626   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.815637   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:59.815645   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:59.815709   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:59.847540   63216 cri.go:89] found id: ""
	I0819 18:15:59.847571   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.847581   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:59.847595   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:59.847609   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.860256   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:59.860292   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:59.931873   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:59.931900   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:59.931915   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:00.011897   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:00.011938   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:00.047600   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:00.047628   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.599457   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:02.617040   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:02.617112   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:02.658148   63216 cri.go:89] found id: ""
	I0819 18:16:02.658173   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.658181   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:02.658187   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:02.658256   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:02.711833   63216 cri.go:89] found id: ""
	I0819 18:16:02.711873   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.711882   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:02.711889   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:02.711945   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:02.746611   63216 cri.go:89] found id: ""
	I0819 18:16:02.746644   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.746652   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:02.746658   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:02.746712   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:02.781731   63216 cri.go:89] found id: ""
	I0819 18:16:02.781757   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.781764   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:02.781771   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:02.781827   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:02.814215   63216 cri.go:89] found id: ""
	I0819 18:16:02.814242   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.814253   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:02.814260   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:02.814320   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:02.848767   63216 cri.go:89] found id: ""
	I0819 18:16:02.848804   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.848815   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:02.848823   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:02.848881   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:02.882890   63216 cri.go:89] found id: ""
	I0819 18:16:02.882913   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.882920   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:02.882927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:02.882983   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:02.918333   63216 cri.go:89] found id: ""
	I0819 18:16:02.918362   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.918370   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:02.918393   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:02.918405   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.966994   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:02.967024   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:02.980377   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:02.980437   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:03.045097   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:03.045127   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:03.045145   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:03.126682   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:03.126727   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:05.662843   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:05.680724   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.680811   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.719205   63216 cri.go:89] found id: ""
	I0819 18:16:05.719227   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.719234   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:05.719240   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.719283   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.764548   63216 cri.go:89] found id: ""
	I0819 18:16:05.764577   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.764587   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:05.764593   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.764644   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.800478   63216 cri.go:89] found id: ""
	I0819 18:16:05.800503   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.800521   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:05.800527   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.800582   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.837403   63216 cri.go:89] found id: ""
	I0819 18:16:05.837432   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.837443   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:05.837450   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.837506   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.869330   63216 cri.go:89] found id: ""
	I0819 18:16:05.869357   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.869367   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:05.869375   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.869463   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.900354   63216 cri.go:89] found id: ""
	I0819 18:16:05.900382   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.900393   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:05.900401   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.900457   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.933899   63216 cri.go:89] found id: ""
	I0819 18:16:05.933926   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.933937   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.933944   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:05.934003   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:05.968393   63216 cri.go:89] found id: ""
	I0819 18:16:05.968421   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.968430   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:05.968441   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:05.968458   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:05.980957   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:05.980988   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:06.045310   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:06.045359   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:06.045375   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.124351   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.124389   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.168102   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.168130   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:08.718499   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:08.731535   63216 kubeadm.go:597] duration metric: took 4m4.252819836s to restartPrimaryControlPlane
	W0819 18:16:08.731622   63216 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:08.731651   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:13.540438   63216 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.808760826s)
	I0819 18:16:13.540508   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:13.555141   63216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:16:13.565159   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:16:13.575671   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:16:13.575689   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:16:13.575743   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:16:13.586181   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:16:13.586388   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:16:13.597239   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:16:13.606788   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:16:13.606857   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:16:13.616964   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.627128   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:16:13.627195   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.637263   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:16:13.646834   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:16:13.646898   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
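Before retrying kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not contain it (here each grep simply exits non-zero because the files are already gone after the reset). A minimal sketch of that stale-config check, using the same endpoint and paths as the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it points at the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done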
	I0819 18:16:13.657566   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:16:13.887585   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:18:09.974002   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:18:09.974108   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:18:09.975602   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:18:09.975650   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:18:09.975736   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:18:09.975861   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:18:09.975993   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:18:09.976086   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:18:09.978023   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:18:09.978100   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:18:09.978157   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:18:09.978230   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:18:09.978281   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:18:09.978358   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:18:09.978408   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:18:09.978466   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:18:09.978529   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:18:09.978645   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:18:09.978758   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:18:09.978816   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:18:09.978890   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:18:09.978973   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:18:09.979046   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:18:09.979138   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:18:09.979191   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:18:09.979339   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:18:09.979438   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:18:09.979503   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:18:09.979595   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:18:09.981931   63216 out.go:235]   - Booting up control plane ...
	I0819 18:18:09.982014   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:18:09.982087   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:18:09.982142   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:18:09.982213   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:18:09.982378   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:18:09.982432   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:18:09.982491   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982715   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982914   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982996   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983204   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983268   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983424   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983485   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983656   63216 kubeadm.go:310] 
	I0819 18:18:09.983705   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:18:09.983747   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:18:09.983754   63216 kubeadm.go:310] 
	I0819 18:18:09.983788   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:18:09.983818   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:18:09.983957   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:18:09.983982   63216 kubeadm.go:310] 
	I0819 18:18:09.984089   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:18:09.984119   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:18:09.984175   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:18:09.984186   63216 kubeadm.go:310] 
	I0819 18:18:09.984277   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:18:09.984372   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:18:09.984378   63216 kubeadm.go:310] 
	I0819 18:18:09.984474   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:18:09.984552   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:18:09.984621   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:18:09.984699   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:18:09.984762   63216 kubeadm.go:310] 
	W0819 18:18:09.984832   63216 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
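The underlying failure is that the kubelet never answers its health probe on port 10248, so kubeadm gives up waiting for the static control-plane pods after the 4m0s wait-control-plane window. The checks suggested in the output above can be run directly on the node; a short sketch, restricted to commands that appear in the log:

    # is the kubelet service running at all, and why did it stop?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # the health endpoint kubeadm polls
    curl -sSL http://localhost:10248/healthz
    # did CRI-O create any control-plane containers?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause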
	
	I0819 18:18:09.984873   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:18:10.439037   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:10.453739   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:18:10.463241   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:18:10.463262   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:18:10.463313   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:18:10.472407   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:18:10.472467   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:18:10.481297   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:18:10.489478   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:18:10.489542   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:18:10.498042   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.506373   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:18:10.506433   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.515158   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:18:10.523412   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:18:10.523483   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:18:10.532060   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:18:10.746836   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:20:06.430174   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:20:06.430256   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:20:06.431894   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:20:06.431968   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:20:06.432060   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:20:06.432203   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:20:06.432334   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:20:06.432440   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:20:06.434250   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:20:06.434349   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:20:06.434444   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:20:06.434563   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:20:06.434623   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:20:06.434717   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:20:06.434805   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:20:06.434894   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:20:06.434974   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:20:06.435052   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:20:06.435135   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:20:06.435204   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:20:06.435288   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:20:06.435365   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:20:06.435421   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:20:06.435474   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:20:06.435531   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:20:06.435689   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:20:06.435781   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:20:06.435827   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:20:06.435886   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:20:06.437538   63216 out.go:235]   - Booting up control plane ...
	I0819 18:20:06.437678   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:20:06.437771   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:20:06.437852   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:20:06.437928   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:20:06.438063   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:20:06.438105   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:20:06.438164   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438342   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438416   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438568   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438637   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438821   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438902   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439167   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439264   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439458   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439472   63216 kubeadm.go:310] 
	I0819 18:20:06.439514   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:20:06.439547   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:20:06.439553   63216 kubeadm.go:310] 
	I0819 18:20:06.439583   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:20:06.439626   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:20:06.439732   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:20:06.439749   63216 kubeadm.go:310] 
	I0819 18:20:06.439873   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:20:06.439915   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:20:06.439944   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:20:06.439952   63216 kubeadm.go:310] 
	I0819 18:20:06.440039   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:20:06.440106   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:20:06.440113   63216 kubeadm.go:310] 
	I0819 18:20:06.440252   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:20:06.440329   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:20:06.440392   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:20:06.440458   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:20:06.440521   63216 kubeadm.go:394] duration metric: took 8m2.012853316s to StartCluster
	I0819 18:20:06.440524   63216 kubeadm.go:310] 
	I0819 18:20:06.440559   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:20:06.440610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:20:06.481255   63216 cri.go:89] found id: ""
	I0819 18:20:06.481285   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.481297   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:20:06.481305   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:20:06.481364   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:20:06.516769   63216 cri.go:89] found id: ""
	I0819 18:20:06.516801   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.516811   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:20:06.516818   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:20:06.516933   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:20:06.551964   63216 cri.go:89] found id: ""
	I0819 18:20:06.551998   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.552006   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:20:06.552014   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:20:06.552108   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:20:06.586084   63216 cri.go:89] found id: ""
	I0819 18:20:06.586115   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.586124   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:20:06.586131   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:20:06.586189   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:20:06.620732   63216 cri.go:89] found id: ""
	I0819 18:20:06.620773   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.620785   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:20:06.620792   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:20:06.620843   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:20:06.659731   63216 cri.go:89] found id: ""
	I0819 18:20:06.659762   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.659772   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:20:06.659780   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:20:06.659846   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:20:06.694223   63216 cri.go:89] found id: ""
	I0819 18:20:06.694257   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.694267   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:20:06.694275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:20:06.694337   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:20:06.727474   63216 cri.go:89] found id: ""
	I0819 18:20:06.727508   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.727518   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:20:06.727528   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:20:06.727538   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:20:06.778006   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:20:06.778041   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:20:06.792059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:20:06.792089   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:20:06.863596   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:20:06.863625   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:20:06.863637   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:20:06.979710   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:20:06.979752   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 18:20:07.030879   63216 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:20:07.030930   63216 out.go:270] * 
	W0819 18:20:07.031004   63216 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.031025   63216 out.go:270] * 
	W0819 18:20:07.031896   63216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:20:07.035220   63216 out.go:201] 
	W0819 18:20:07.036384   63216 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.036435   63216 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:20:07.036466   63216 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:20:07.037783   63216 out.go:201] 

                                                
                                                
** /stderr **
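The captured output above boils down to kubeadm timing out because the kubelet never answered on port 10248. The troubleshooting commands that kubeadm and minikube print can be run directly against the node; a minimal sketch, assuming SSH access to the old-k8s-version-079123 profile (name taken from the failure line below) and the CRI-O runtime used in this job:

	# Inspect the kubelet and the CRI-O containers on the node
	out/minikube-linux-amd64 -p old-k8s-version-079123 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-079123 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-079123 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# Retry with the cgroup driver that minikube itself suggests in the log
	out/minikube-linux-amd64 start -p old-k8s-version-079123 --extra-config=kubelet.cgroup-driver=systemd

These are the same commands quoted in the log; the last one is minikube's own suggestion (see also https://github.com/kubernetes/minikube/issues/4172).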
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-079123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (223.303642ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
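The host-only status probe that produced the "may be ok" note above can be reproduced by hand; a minimal sketch, using the same binary and profile as the harness:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-079123
	echo $?   # non-zero here: the VM prints Running, but minikube encodes unhealthy cluster components in the exit status

The exact exit-code encoding is a minikube implementation detail; what matters for the post-mortem is that the host VM is up while the control plane is not.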
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-079123 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-975771                              | cert-expiration-975771       | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-233969                  | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-233969                                   | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233045             | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079123        | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233045                  | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-813424       | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:16 UTC |
	|         | default-k8s-diff-port-813424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079123             | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-233045 image list                           | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-814719 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | disable-driver-mounts-814719                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306581            | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC | 19 Aug 24 18:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306581                 | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:15:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:15:52.756356   66229 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:15:52.756664   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756675   66229 out.go:358] Setting ErrFile to fd 2...
	I0819 18:15:52.756680   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756881   66229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:15:52.757409   66229 out.go:352] Setting JSON to false
	I0819 18:15:52.758366   66229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7098,"bootTime":1724084255,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:15:52.758430   66229 start.go:139] virtualization: kvm guest
	I0819 18:15:52.760977   66229 out.go:177] * [embed-certs-306581] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:15:52.762479   66229 notify.go:220] Checking for updates...
	I0819 18:15:52.762504   66229 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:15:52.763952   66229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:15:52.765453   66229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:15:52.766810   66229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:15:52.768135   66229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:15:52.769369   66229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:15:52.771017   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:52.771443   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.771504   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.786463   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0819 18:15:52.786925   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.787501   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.787523   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.787800   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.787975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.788239   66229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:15:52.788527   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.788562   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.803703   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0819 18:15:52.804145   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.804609   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.804625   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.804962   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.805142   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.842707   66229 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:15:52.844070   66229 start.go:297] selected driver: kvm2
	I0819 18:15:52.844092   66229 start.go:901] validating driver "kvm2" against &{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.844258   66229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:15:52.844998   66229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.845085   66229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:15:52.860606   66229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:15:52.861678   66229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:15:52.861730   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:15:52.861742   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:15:52.861793   66229 start.go:340] cluster config:
	{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.862003   66229 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.864173   66229 out.go:177] * Starting "embed-certs-306581" primary control-plane node in "embed-certs-306581" cluster
	I0819 18:15:52.865772   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:15:52.865819   66229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:15:52.865827   66229 cache.go:56] Caching tarball of preloaded images
	I0819 18:15:52.865902   66229 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:15:52.865913   66229 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:15:52.866012   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:15:52.866250   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:15:52.866299   66229 start.go:364] duration metric: took 26.7µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:15:52.866311   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:15:52.866316   66229 fix.go:54] fixHost starting: 
	I0819 18:15:52.866636   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.866671   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.883154   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0819 18:15:52.883648   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.884149   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.884170   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.884509   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.884710   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.884888   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:15:52.886632   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Running err=<nil>
	W0819 18:15:52.886653   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:15:52.888856   66229 out.go:177] * Updating the running kvm2 "embed-certs-306581" VM ...
	I0819 18:15:50.375775   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.376597   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:50.455083   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:50.467702   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:50.467768   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:50.517276   63216 cri.go:89] found id: ""
	I0819 18:15:50.517306   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.517315   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:50.517323   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:50.517399   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:50.550878   63216 cri.go:89] found id: ""
	I0819 18:15:50.550905   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.550914   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:50.550921   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:50.550984   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:50.583515   63216 cri.go:89] found id: ""
	I0819 18:15:50.583543   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.583553   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:50.583560   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:50.583622   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:50.618265   63216 cri.go:89] found id: ""
	I0819 18:15:50.618291   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.618299   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:50.618304   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:50.618362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:50.653436   63216 cri.go:89] found id: ""
	I0819 18:15:50.653461   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.653469   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:50.653476   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:50.653534   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:50.687715   63216 cri.go:89] found id: ""
	I0819 18:15:50.687745   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.687757   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:50.687764   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:50.687885   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:50.721235   63216 cri.go:89] found id: ""
	I0819 18:15:50.721262   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.721272   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:50.721280   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:50.721328   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:50.754095   63216 cri.go:89] found id: ""
	I0819 18:15:50.754126   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.754134   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:50.754143   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:50.754156   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:50.805661   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:50.805698   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:50.819495   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:50.819536   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:50.887296   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:50.887317   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:50.887334   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:50.966224   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:50.966261   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.508007   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:53.520812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:53.520870   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:53.552790   63216 cri.go:89] found id: ""
	I0819 18:15:53.552816   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.552823   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:53.552829   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:53.552873   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:53.585937   63216 cri.go:89] found id: ""
	I0819 18:15:53.585969   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.585978   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:53.585986   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:53.586057   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:53.618890   63216 cri.go:89] found id: ""
	I0819 18:15:53.618915   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.618922   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:53.618928   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:53.618975   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:53.650045   63216 cri.go:89] found id: ""
	I0819 18:15:53.650069   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.650076   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:53.650082   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:53.650138   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:53.685069   63216 cri.go:89] found id: ""
	I0819 18:15:53.685097   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.685106   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:53.685113   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:53.685179   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:53.717742   63216 cri.go:89] found id: ""
	I0819 18:15:53.717771   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.717778   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:53.717784   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:53.717832   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:53.747768   63216 cri.go:89] found id: ""
	I0819 18:15:53.747798   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.747806   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:53.747812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:53.747858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:53.779973   63216 cri.go:89] found id: ""
	I0819 18:15:53.779999   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.780006   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:53.780016   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:53.780027   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.815619   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:53.815656   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:53.866767   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:53.866802   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:53.879693   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:53.879721   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:53.947610   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:53.947640   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:53.947659   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:52.172237   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:54.172434   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.890101   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:15:52.890131   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.890374   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:15:52.892900   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893405   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:12:30 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:15:52.893431   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893613   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:15:52.893796   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.893979   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.894149   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:15:52.894328   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:52.894580   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:15:52.894597   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:15:55.789130   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:54.376799   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.884787   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.524639   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:56.537312   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:56.537395   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:56.569913   63216 cri.go:89] found id: ""
	I0819 18:15:56.569958   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.569965   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:56.569972   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:56.570031   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:56.602119   63216 cri.go:89] found id: ""
	I0819 18:15:56.602145   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.602152   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:56.602158   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:56.602211   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:56.634864   63216 cri.go:89] found id: ""
	I0819 18:15:56.634900   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.634910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:56.634920   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:56.634982   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:56.667099   63216 cri.go:89] found id: ""
	I0819 18:15:56.667127   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.667136   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:56.667145   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:56.667194   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:56.703539   63216 cri.go:89] found id: ""
	I0819 18:15:56.703562   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.703571   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:56.703576   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:56.703637   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.734668   63216 cri.go:89] found id: ""
	I0819 18:15:56.734691   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.734698   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:56.734703   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:56.734747   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:56.768840   63216 cri.go:89] found id: ""
	I0819 18:15:56.768866   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.768874   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:56.768880   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:56.768925   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:56.800337   63216 cri.go:89] found id: ""
	I0819 18:15:56.800366   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.800375   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:56.800384   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:56.800398   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:56.866036   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:56.866060   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:56.866072   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:56.955372   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:56.955414   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:57.004450   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:57.004477   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:57.057284   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:57.057320   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.570450   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:59.583640   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:59.583729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:59.617911   63216 cri.go:89] found id: ""
	I0819 18:15:59.617943   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.617954   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:59.617963   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:59.618014   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:59.650239   63216 cri.go:89] found id: ""
	I0819 18:15:59.650265   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.650274   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:59.650279   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:59.650329   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:59.684877   63216 cri.go:89] found id: ""
	I0819 18:15:59.684902   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.684910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:59.684916   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:59.684977   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:59.717378   63216 cri.go:89] found id: ""
	I0819 18:15:59.717402   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.717414   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:59.717428   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:59.717484   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:59.748937   63216 cri.go:89] found id: ""
	I0819 18:15:59.748968   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.748980   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:59.748989   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:59.749058   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.672222   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.171375   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:58.861002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:59.375951   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:01.376193   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:03.376512   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.781784   63216 cri.go:89] found id: ""
	I0819 18:15:59.781819   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.781830   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:59.781837   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:59.781899   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:59.815593   63216 cri.go:89] found id: ""
	I0819 18:15:59.815626   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.815637   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:59.815645   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:59.815709   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:59.847540   63216 cri.go:89] found id: ""
	I0819 18:15:59.847571   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.847581   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:59.847595   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:59.847609   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.860256   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:59.860292   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:59.931873   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:59.931900   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:59.931915   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:00.011897   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:00.011938   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:00.047600   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:00.047628   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.599457   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:02.617040   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:02.617112   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:02.658148   63216 cri.go:89] found id: ""
	I0819 18:16:02.658173   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.658181   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:02.658187   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:02.658256   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:02.711833   63216 cri.go:89] found id: ""
	I0819 18:16:02.711873   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.711882   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:02.711889   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:02.711945   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:02.746611   63216 cri.go:89] found id: ""
	I0819 18:16:02.746644   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.746652   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:02.746658   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:02.746712   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:02.781731   63216 cri.go:89] found id: ""
	I0819 18:16:02.781757   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.781764   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:02.781771   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:02.781827   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:02.814215   63216 cri.go:89] found id: ""
	I0819 18:16:02.814242   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.814253   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:02.814260   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:02.814320   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:02.848767   63216 cri.go:89] found id: ""
	I0819 18:16:02.848804   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.848815   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:02.848823   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:02.848881   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:02.882890   63216 cri.go:89] found id: ""
	I0819 18:16:02.882913   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.882920   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:02.882927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:02.882983   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:02.918333   63216 cri.go:89] found id: ""
	I0819 18:16:02.918362   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.918370   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:02.918393   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:02.918405   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.966994   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:02.967024   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:02.980377   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:02.980437   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:03.045097   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:03.045127   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:03.045145   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:03.126682   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:03.126727   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:01.671492   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.171471   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.941029   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:05.376677   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:05.376705   62749 pod_ready.go:82] duration metric: took 4m0.006404877s for pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:05.376714   62749 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 18:16:05.376720   62749 pod_ready.go:39] duration metric: took 4m6.335802515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:05.376735   62749 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:16:05.376775   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.376822   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.419678   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:05.419719   62749 cri.go:89] found id: ""
	I0819 18:16:05.419728   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:05.419801   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.424210   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.424271   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.459501   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:05.459527   62749 cri.go:89] found id: ""
	I0819 18:16:05.459535   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:05.459578   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.463654   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.463711   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.497591   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:05.497613   62749 cri.go:89] found id: ""
	I0819 18:16:05.497620   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:05.497667   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.501207   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.501274   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.535112   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:05.535141   62749 cri.go:89] found id: ""
	I0819 18:16:05.535150   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:05.535215   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.538855   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.538909   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.573744   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:05.573769   62749 cri.go:89] found id: ""
	I0819 18:16:05.573776   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:05.573824   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.577981   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.578045   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.616545   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:05.616569   62749 cri.go:89] found id: ""
	I0819 18:16:05.616577   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:05.616630   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.620549   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.620597   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.662743   62749 cri.go:89] found id: ""
	I0819 18:16:05.662781   62749 logs.go:276] 0 containers: []
	W0819 18:16:05.662792   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.662800   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:05.662855   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:05.711433   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.711456   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:05.711463   62749 cri.go:89] found id: ""
	I0819 18:16:05.711472   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:05.711536   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.716476   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.720240   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:05.720261   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.261474   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:06.261523   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:06.384895   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:06.384927   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:06.421665   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:06.421700   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:06.461866   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:06.461900   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:06.496543   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:06.496570   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:06.551478   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:06.551518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:06.586858   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.586886   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.625272   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.625300   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:06.697922   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:06.697960   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:06.711624   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:06.711658   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:06.752648   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:06.752677   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:06.796805   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:06.796836   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.662843   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:05.680724   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.680811   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.719205   63216 cri.go:89] found id: ""
	I0819 18:16:05.719227   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.719234   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:05.719240   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.719283   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.764548   63216 cri.go:89] found id: ""
	I0819 18:16:05.764577   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.764587   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:05.764593   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.764644   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.800478   63216 cri.go:89] found id: ""
	I0819 18:16:05.800503   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.800521   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:05.800527   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.800582   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.837403   63216 cri.go:89] found id: ""
	I0819 18:16:05.837432   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.837443   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:05.837450   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.837506   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.869330   63216 cri.go:89] found id: ""
	I0819 18:16:05.869357   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.869367   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:05.869375   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.869463   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.900354   63216 cri.go:89] found id: ""
	I0819 18:16:05.900382   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.900393   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:05.900401   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.900457   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.933899   63216 cri.go:89] found id: ""
	I0819 18:16:05.933926   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.933937   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.933944   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:05.934003   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:05.968393   63216 cri.go:89] found id: ""
	I0819 18:16:05.968421   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.968430   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:05.968441   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:05.968458   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:05.980957   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:05.980988   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:06.045310   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:06.045359   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:06.045375   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.124351   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.124389   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.168102   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.168130   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:08.718499   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:08.731535   63216 kubeadm.go:597] duration metric: took 4m4.252819836s to restartPrimaryControlPlane
	W0819 18:16:08.731622   63216 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:08.731651   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:06.172578   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.671110   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.013019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:09.338729   62749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:09.355014   62749 api_server.go:72] duration metric: took 4m18.036977131s to wait for apiserver process to appear ...
	I0819 18:16:09.355046   62749 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:16:09.355086   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:09.355148   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:09.390088   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:09.390107   62749 cri.go:89] found id: ""
	I0819 18:16:09.390115   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:09.390161   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.393972   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:09.394024   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:09.426919   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:09.426943   62749 cri.go:89] found id: ""
	I0819 18:16:09.426953   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:09.427007   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.430685   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:09.430755   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:09.465843   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:09.465867   62749 cri.go:89] found id: ""
	I0819 18:16:09.465876   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:09.465936   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.469990   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:09.470057   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:09.503690   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:09.503716   62749 cri.go:89] found id: ""
	I0819 18:16:09.503727   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:09.503789   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.507731   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:09.507791   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:09.541067   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:09.541098   62749 cri.go:89] found id: ""
	I0819 18:16:09.541108   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:09.541169   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.546503   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:09.546568   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:09.587861   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:09.587888   62749 cri.go:89] found id: ""
	I0819 18:16:09.587898   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:09.587960   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.593765   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:09.593831   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:09.628426   62749 cri.go:89] found id: ""
	I0819 18:16:09.628456   62749 logs.go:276] 0 containers: []
	W0819 18:16:09.628464   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:09.628470   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:09.628529   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:09.666596   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.666622   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.666628   62749 cri.go:89] found id: ""
	I0819 18:16:09.666636   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:09.666688   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.670929   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.674840   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:09.674863   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.708286   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:09.708313   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.739212   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:09.739234   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:10.171487   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:10.171535   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:10.208985   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:10.209025   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:10.222001   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:10.222028   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:10.267193   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:10.267225   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:10.300082   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:10.300110   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:10.333403   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:10.333434   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:10.371961   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:10.371989   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:10.425550   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:10.425586   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:10.500742   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:10.500796   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:10.602484   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:10.602518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.149769   62749 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8444/healthz ...
	I0819 18:16:13.154238   62749 api_server.go:279] https://192.168.61.243:8444/healthz returned 200:
	ok
	I0819 18:16:13.155139   62749 api_server.go:141] control plane version: v1.31.0
	I0819 18:16:13.155154   62749 api_server.go:131] duration metric: took 3.800101993s to wait for apiserver health ...
	I0819 18:16:13.155161   62749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:16:13.155180   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:13.155232   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:13.194723   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.194749   62749 cri.go:89] found id: ""
	I0819 18:16:13.194759   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:13.194811   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.198645   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:13.198703   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:13.236332   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.236405   62749 cri.go:89] found id: ""
	I0819 18:16:13.236418   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:13.236473   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.240682   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:13.240764   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:13.277257   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:13.277283   62749 cri.go:89] found id: ""
	I0819 18:16:13.277290   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:13.277339   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.281458   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:13.281516   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:13.319419   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.319444   62749 cri.go:89] found id: ""
	I0819 18:16:13.319453   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:13.319508   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.323377   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:13.323444   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:13.357320   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.357344   62749 cri.go:89] found id: ""
	I0819 18:16:13.357353   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:13.357417   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.361505   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:13.361582   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:13.396379   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.396396   62749 cri.go:89] found id: ""
	I0819 18:16:13.396403   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:13.396457   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.400372   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:13.400442   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:13.433520   62749 cri.go:89] found id: ""
	I0819 18:16:13.433551   62749 logs.go:276] 0 containers: []
	W0819 18:16:13.433561   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:13.433569   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:13.433629   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:13.467382   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.467411   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.467418   62749 cri.go:89] found id: ""
	I0819 18:16:13.467427   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:13.467486   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.471371   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.474905   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:13.474924   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:13.547564   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:13.547596   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.593702   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:13.593731   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.629610   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:13.629634   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.669337   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:13.669372   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.729986   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:13.730012   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.766424   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:13.766459   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.806677   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:13.806702   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
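The "Gathering logs for ..." steps above reduce to a handful of commands run over SSH on the node; a minimal sketch, assuming a container ID already resolved via crictl ps exactly as in the Run: lines of this log:

    # resolve the container ID for one control-plane component, then tail its logs
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo /usr/bin/crictl logs --tail 400 "$ID"
    # unit logs for the kubelet and the CRI-O runtime
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400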
	I0819 18:16:13.540438   63216 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.808760826s)
	I0819 18:16:13.540508   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:13.555141   63216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:16:13.565159   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:16:13.575671   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:16:13.575689   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:16:13.575743   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:16:13.586181   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:16:13.586388   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:16:13.597239   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:16:13.606788   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:16:13.606857   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:16:13.616964   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.627128   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:16:13.627195   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.637263   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:16:13.646834   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:16:13.646898   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
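Because the combined config check failed, the per-file cleanup above boils down to roughly the following loop (a sketch using only the four kubeconfig files and the control-plane endpoint that appear in the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control-plane endpoint
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done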
	I0819 18:16:13.657566   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:16:13.887585   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:16:11.171886   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:13.672521   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:14.199046   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:14.199103   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:14.213508   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:14.213537   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:14.341980   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:14.342017   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:14.389817   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:14.389853   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:14.425890   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:14.425928   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:16.991182   62749 system_pods.go:59] 8 kube-system pods found
	I0819 18:16:16.991211   62749 system_pods.go:61] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.991217   62749 system_pods.go:61] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.991221   62749 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.991225   62749 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.991229   62749 system_pods.go:61] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.991232   62749 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.991239   62749 system_pods.go:61] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.991243   62749 system_pods.go:61] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.991250   62749 system_pods.go:74] duration metric: took 3.836084784s to wait for pod list to return data ...
	I0819 18:16:16.991257   62749 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:16:16.993181   62749 default_sa.go:45] found service account: "default"
	I0819 18:16:16.993201   62749 default_sa.go:55] duration metric: took 1.93729ms for default service account to be created ...
	I0819 18:16:16.993208   62749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:16:16.997803   62749 system_pods.go:86] 8 kube-system pods found
	I0819 18:16:16.997825   62749 system_pods.go:89] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.997830   62749 system_pods.go:89] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.997835   62749 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.997840   62749 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.997844   62749 system_pods.go:89] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.997848   62749 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.997854   62749 system_pods.go:89] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.997861   62749 system_pods.go:89] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.997868   62749 system_pods.go:126] duration metric: took 4.655661ms to wait for k8s-apps to be running ...
	I0819 18:16:16.997877   62749 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:16:16.997917   62749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:17.013524   62749 system_svc.go:56] duration metric: took 15.634104ms WaitForService to wait for kubelet
	I0819 18:16:17.013559   62749 kubeadm.go:582] duration metric: took 4m25.695525816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:16:17.013585   62749 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:16:17.016278   62749 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:16:17.016301   62749 node_conditions.go:123] node cpu capacity is 2
	I0819 18:16:17.016315   62749 node_conditions.go:105] duration metric: took 2.723578ms to run NodePressure ...
	I0819 18:16:17.016326   62749 start.go:241] waiting for startup goroutines ...
	I0819 18:16:17.016336   62749 start.go:246] waiting for cluster config update ...
	I0819 18:16:17.016351   62749 start.go:255] writing updated cluster config ...
	I0819 18:16:17.016817   62749 ssh_runner.go:195] Run: rm -f paused
	I0819 18:16:17.063056   62749 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:16:17.065819   62749 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-813424" cluster and "default" namespace by default
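At this point the default-k8s-diff-port-813424 profile is up, with the apiserver answering healthz on port 8444 as checked above; a quick manual verification against that context, purely illustrative and not part of the harness, would be:

    kubectl --context default-k8s-diff-port-813424 get nodes -o wide
    kubectl --context default-k8s-diff-port-813424 -n kube-system get pods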
	I0819 18:16:14.093007   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:17.164989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:16.172074   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:18.670402   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:20.671024   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:22.671462   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:26.288975   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:25.175354   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:27.671452   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.671496   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.357082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:31.671726   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:33.672458   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:35.437060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:36.171920   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.172318   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.513064   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:40.670687   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:42.670858   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.671276   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.589000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.660996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.171302   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:49.171707   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:51.675414   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:53.665939   62137 pod_ready.go:82] duration metric: took 4m0.001066956s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:53.665969   62137 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:16:53.665994   62137 pod_ready.go:39] duration metric: took 4m12.464901403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:53.666051   62137 kubeadm.go:597] duration metric: took 4m20.502224967s to restartPrimaryControlPlane
	W0819 18:16:53.666114   62137 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:53.666143   62137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:53.740978   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:56.817027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:02.892936   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:05.965053   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:12.048961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:15.116969   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
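The interleaved "dial tcp 192.168.72.181:22: connect: no route to host" lines come from a separate minikube process (66229 in the klog header) repeatedly failing to reach its VM over SSH; an equivalent reachability check along the same path, shown only as an illustration:

    ping -c 3 192.168.72.181          # is the guest reachable at all?
    nc -vz -w 5 192.168.72.181 22     # does anything answer on the SSH port?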
	I0819 18:17:19.922253   62137 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.256081543s)
	I0819 18:17:19.922334   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:19.937012   62137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:17:19.946269   62137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:17:19.955344   62137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:17:19.955363   62137 kubeadm.go:157] found existing configuration files:
	
	I0819 18:17:19.955405   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:17:19.963979   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:17:19.964039   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:17:19.972679   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:17:19.980890   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:17:19.980947   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:17:19.989705   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:17:19.998606   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:17:19.998664   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:17:20.007553   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:17:20.016136   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:17:20.016185   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:17:20.024827   62137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:17:20.073205   62137 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:17:20.073284   62137 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:17:20.186906   62137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:17:20.187034   62137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:17:20.187125   62137 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:17:20.198750   62137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:17:20.200704   62137 out.go:235]   - Generating certificates and keys ...
	I0819 18:17:20.200810   62137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:17:20.200905   62137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:17:20.201015   62137 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:17:20.201099   62137 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:17:20.201202   62137 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:17:20.201279   62137 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:17:20.201370   62137 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:17:20.201468   62137 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:17:20.201578   62137 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:17:20.201686   62137 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:17:20.201743   62137 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:17:20.201823   62137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:17:20.386866   62137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:17:20.483991   62137 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:17:20.575440   62137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:17:20.704349   62137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:17:20.834890   62137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:17:20.835583   62137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:17:20.839290   62137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:17:21.197002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:20.841232   62137 out.go:235]   - Booting up control plane ...
	I0819 18:17:20.841313   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:17:20.841374   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:17:20.841428   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:17:20.858185   62137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:17:20.866369   62137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:17:20.866447   62137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:17:20.997302   62137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:17:20.997435   62137 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:17:21.499506   62137 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041994ms
	I0819 18:17:21.499625   62137 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:17:26.501489   62137 kubeadm.go:310] [api-check] The API server is healthy after 5.002014094s
	I0819 18:17:26.514398   62137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:17:26.534278   62137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:17:26.557460   62137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:17:26.557706   62137 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-233969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:17:26.569142   62137 kubeadm.go:310] [bootstrap-token] Using token: 2skh80.c6u95wnw3x4gmagv
	I0819 18:17:24.273082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:26.570814   62137 out.go:235]   - Configuring RBAC rules ...
	I0819 18:17:26.570940   62137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:17:26.583073   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:17:26.592407   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:17:26.595488   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:17:26.599062   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:17:26.603754   62137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:17:26.908245   62137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:17:27.340277   62137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:17:27.909394   62137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:17:27.912696   62137 kubeadm.go:310] 
	I0819 18:17:27.912811   62137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:17:27.912834   62137 kubeadm.go:310] 
	I0819 18:17:27.912953   62137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:17:27.912965   62137 kubeadm.go:310] 
	I0819 18:17:27.912996   62137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:17:27.913086   62137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:17:27.913166   62137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:17:27.913178   62137 kubeadm.go:310] 
	I0819 18:17:27.913246   62137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:17:27.913266   62137 kubeadm.go:310] 
	I0819 18:17:27.913338   62137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:17:27.913349   62137 kubeadm.go:310] 
	I0819 18:17:27.913422   62137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:17:27.913527   62137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:17:27.913613   62137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:17:27.913622   62137 kubeadm.go:310] 
	I0819 18:17:27.913727   62137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:17:27.913827   62137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:17:27.913842   62137 kubeadm.go:310] 
	I0819 18:17:27.913934   62137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914073   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:17:27.914112   62137 kubeadm.go:310] 	--control-plane 
	I0819 18:17:27.914121   62137 kubeadm.go:310] 
	I0819 18:17:27.914223   62137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:17:27.914235   62137 kubeadm.go:310] 
	I0819 18:17:27.914353   62137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914499   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:17:27.916002   62137 kubeadm.go:310] W0819 18:17:20.045306    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916280   62137 kubeadm.go:310] W0819 18:17:20.046268    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916390   62137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:17:27.916417   62137 cni.go:84] Creating CNI manager for ""
	I0819 18:17:27.916426   62137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:17:27.918384   62137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:17:27.919646   62137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:17:27.930298   62137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 18:17:27.946332   62137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:17:27.946440   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:27.946462   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-233969 minikube.k8s.io/updated_at=2024_08_19T18_17_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=no-preload-233969 minikube.k8s.io/primary=true
	I0819 18:17:27.972836   62137 ops.go:34] apiserver oom_adj: -16
	I0819 18:17:28.134899   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:28.635909   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.135326   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.635339   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.135992   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.635626   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.135493   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.635632   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.135812   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.208229   62137 kubeadm.go:1113] duration metric: took 4.261865811s to wait for elevateKubeSystemPrivileges
	I0819 18:17:32.208254   62137 kubeadm.go:394] duration metric: took 4m59.094587246s to StartCluster
	I0819 18:17:32.208270   62137 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.208350   62137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:17:32.210604   62137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.210888   62137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:17:32.210967   62137 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:17:32.211052   62137 addons.go:69] Setting storage-provisioner=true in profile "no-preload-233969"
	I0819 18:17:32.211070   62137 addons.go:69] Setting default-storageclass=true in profile "no-preload-233969"
	I0819 18:17:32.211088   62137 addons.go:234] Setting addon storage-provisioner=true in "no-preload-233969"
	I0819 18:17:32.211084   62137 addons.go:69] Setting metrics-server=true in profile "no-preload-233969"
	W0819 18:17:32.211096   62137 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:17:32.211102   62137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-233969"
	I0819 18:17:32.211125   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211126   62137 addons.go:234] Setting addon metrics-server=true in "no-preload-233969"
	W0819 18:17:32.211166   62137 addons.go:243] addon metrics-server should already be in state true
	I0819 18:17:32.211198   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211124   62137 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:17:32.211475   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211505   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211589   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211601   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211619   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211623   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.212714   62137 out.go:177] * Verifying Kubernetes components...
	I0819 18:17:32.214075   62137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:17:32.227207   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0819 18:17:32.227219   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0819 18:17:32.227615   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.227709   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.228122   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228142   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228216   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228236   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228543   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.228610   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.229074   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229112   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.229120   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229147   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.230316   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0819 18:17:32.230746   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.231408   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.231437   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.231812   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.232018   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.235965   62137 addons.go:234] Setting addon default-storageclass=true in "no-preload-233969"
	W0819 18:17:32.235986   62137 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:17:32.236013   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.236365   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.236392   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.244668   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0819 18:17:32.245056   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.245506   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.245534   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.245816   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0819 18:17:32.245848   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.245989   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.246239   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.246795   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.246811   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.247182   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.247380   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.248517   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.249498   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.250817   62137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:17:32.251649   62137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:17:30.348988   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:32.252466   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:17:32.252483   62137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:17:32.252501   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253309   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0819 18:17:32.253687   62137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.253701   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:17:32.253717   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253828   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.254340   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.254352   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.254706   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.255288   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.255324   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.256274   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256776   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.256796   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256970   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.257109   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.257229   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.257348   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.257756   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258132   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.258144   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258384   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.258531   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.258663   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.258788   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.271706   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0819 18:17:32.272115   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.272558   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.272575   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.272875   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.273041   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.274711   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.274914   62137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.274924   62137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:17:32.274936   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.277689   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278191   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.278246   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278358   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.278533   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.278701   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.278847   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.423546   62137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:17:32.445680   62137 node_ready.go:35] waiting up to 6m0s for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.471999   62137 node_ready.go:49] node "no-preload-233969" has status "Ready":"True"
	I0819 18:17:32.472028   62137 node_ready.go:38] duration metric: took 26.307315ms for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.472041   62137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:32.478401   62137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:32.518483   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.568928   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:17:32.568953   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:17:32.592301   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.645484   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:17:32.645513   62137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:17:32.715522   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:32.715552   62137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:17:32.781693   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:33.756997   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.238477445s)
	I0819 18:17:33.757035   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757044   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757051   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.164710772s)
	I0819 18:17:33.757088   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757101   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757454   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757450   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757466   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757475   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757483   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757490   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757538   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757564   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757616   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757640   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757712   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757729   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757733   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757852   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757915   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757937   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.831562   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.831588   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.831891   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.831907   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928005   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146269845s)
	I0819 18:17:33.928064   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928082   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928391   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928438   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928452   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928465   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928477   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928809   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928820   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928835   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928851   62137 addons.go:475] Verifying addon metrics-server=true in "no-preload-233969"
	I0819 18:17:33.930974   62137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 18:17:33.932101   62137 addons.go:510] duration metric: took 1.72114773s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 18:17:34.486566   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:33.421045   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:36.984891   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.484617   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.500962   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:42.572983   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:41.990189   62137 pod_ready.go:93] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.990210   62137 pod_ready.go:82] duration metric: took 9.511780534s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.990221   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997282   62137 pod_ready.go:93] pod "kube-apiserver-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.997301   62137 pod_ready.go:82] duration metric: took 7.074393ms for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997310   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008757   62137 pod_ready.go:93] pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.008775   62137 pod_ready.go:82] duration metric: took 11.458424ms for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008785   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017802   62137 pod_ready.go:93] pod "kube-proxy-pt5nj" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.017820   62137 pod_ready.go:82] duration metric: took 9.029628ms for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017828   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025402   62137 pod_ready.go:93] pod "kube-scheduler-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.025424   62137 pod_ready.go:82] duration metric: took 7.589229ms for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025433   62137 pod_ready.go:39] duration metric: took 9.553379252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:42.025451   62137 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:17:42.025508   62137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:17:42.043190   62137 api_server.go:72] duration metric: took 9.832267712s to wait for apiserver process to appear ...
	I0819 18:17:42.043214   62137 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:17:42.043231   62137 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I0819 18:17:42.051124   62137 api_server.go:279] https://192.168.50.8:8443/healthz returned 200:
	ok
	I0819 18:17:42.052367   62137 api_server.go:141] control plane version: v1.31.0
	I0819 18:17:42.052392   62137 api_server.go:131] duration metric: took 9.170652ms to wait for apiserver health ...
	I0819 18:17:42.052404   62137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:17:42.187227   62137 system_pods.go:59] 9 kube-system pods found
	I0819 18:17:42.187254   62137 system_pods.go:61] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.187259   62137 system_pods.go:61] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.187263   62137 system_pods.go:61] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.187267   62137 system_pods.go:61] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.187270   62137 system_pods.go:61] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.187273   62137 system_pods.go:61] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.187277   62137 system_pods.go:61] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.187282   62137 system_pods.go:61] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.187285   62137 system_pods.go:61] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.187292   62137 system_pods.go:74] duration metric: took 134.882111ms to wait for pod list to return data ...
	I0819 18:17:42.187299   62137 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:17:42.382612   62137 default_sa.go:45] found service account: "default"
	I0819 18:17:42.382643   62137 default_sa.go:55] duration metric: took 195.337173ms for default service account to be created ...
	I0819 18:17:42.382652   62137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:17:42.585988   62137 system_pods.go:86] 9 kube-system pods found
	I0819 18:17:42.586024   62137 system_pods.go:89] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.586032   62137 system_pods.go:89] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.586038   62137 system_pods.go:89] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.586044   62137 system_pods.go:89] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.586049   62137 system_pods.go:89] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.586056   62137 system_pods.go:89] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.586062   62137 system_pods.go:89] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.586072   62137 system_pods.go:89] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.586078   62137 system_pods.go:89] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.586089   62137 system_pods.go:126] duration metric: took 203.431371ms to wait for k8s-apps to be running ...
	I0819 18:17:42.586101   62137 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:17:42.586154   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:42.601268   62137 system_svc.go:56] duration metric: took 15.156104ms WaitForService to wait for kubelet
	I0819 18:17:42.601305   62137 kubeadm.go:582] duration metric: took 10.39038433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:17:42.601330   62137 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:17:42.783030   62137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:17:42.783058   62137 node_conditions.go:123] node cpu capacity is 2
	I0819 18:17:42.783069   62137 node_conditions.go:105] duration metric: took 181.734608ms to run NodePressure ...
	I0819 18:17:42.783080   62137 start.go:241] waiting for startup goroutines ...
	I0819 18:17:42.783087   62137 start.go:246] waiting for cluster config update ...
	I0819 18:17:42.783097   62137 start.go:255] writing updated cluster config ...
	I0819 18:17:42.783349   62137 ssh_runner.go:195] Run: rm -f paused
	I0819 18:17:42.831445   62137 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:17:42.833881   62137 out.go:177] * Done! kubectl is now configured to use "no-preload-233969" cluster and "default" namespace by default
	I0819 18:17:48.653035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:51.725070   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:57.805043   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:00.881114   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:06.956979   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.974002   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:18:09.974108   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:18:09.975602   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:18:09.975650   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:18:09.975736   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:18:09.975861   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:18:09.975993   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:18:09.976086   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:18:09.978023   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:18:09.978100   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:18:09.978157   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:18:09.978230   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:18:09.978281   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:18:09.978358   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:18:09.978408   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:18:09.978466   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:18:09.978529   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:18:09.978645   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:18:09.978758   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:18:09.978816   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:18:09.978890   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:18:09.978973   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:18:09.979046   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:18:09.979138   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:18:09.979191   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:18:09.979339   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:18:09.979438   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:18:09.979503   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:18:09.979595   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:18:10.028995   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.981931   63216 out.go:235]   - Booting up control plane ...
	I0819 18:18:09.982014   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:18:09.982087   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:18:09.982142   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:18:09.982213   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:18:09.982378   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:18:09.982432   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:18:09.982491   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982715   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982914   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982996   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983204   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983268   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983424   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983485   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983656   63216 kubeadm.go:310] 
	I0819 18:18:09.983705   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:18:09.983747   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:18:09.983754   63216 kubeadm.go:310] 
	I0819 18:18:09.983788   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:18:09.983818   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:18:09.983957   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:18:09.983982   63216 kubeadm.go:310] 
	I0819 18:18:09.984089   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:18:09.984119   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:18:09.984175   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:18:09.984186   63216 kubeadm.go:310] 
	I0819 18:18:09.984277   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:18:09.984372   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:18:09.984378   63216 kubeadm.go:310] 
	I0819 18:18:09.984474   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:18:09.984552   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:18:09.984621   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:18:09.984699   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:18:09.984762   63216 kubeadm.go:310] 
	W0819 18:18:09.984832   63216 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 18:18:09.984873   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:18:10.439037   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:10.453739   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:18:10.463241   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:18:10.463262   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:18:10.463313   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:18:10.472407   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:18:10.472467   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:18:10.481297   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:18:10.489478   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:18:10.489542   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:18:10.498042   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.506373   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:18:10.506433   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.515158   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:18:10.523412   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:18:10.523483   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:18:10.532060   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:18:10.746836   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:18:16.109014   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:19.180970   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:25.261041   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:28.333057   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:34.412966   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:37.485036   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:43.565013   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:46.637059   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:52.716967   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:55.789060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:01.869005   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:04.941027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:11.020989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:14.093067   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:20.173021   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:23.248974   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:29.324961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:32.397037   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:38.477031   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:41.549001   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:47.629019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:50.700996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:56.781035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:59.853000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:06.430174   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:20:06.430256   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:20:06.431894   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:20:06.431968   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:20:06.432060   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:20:06.432203   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:20:06.432334   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:20:06.432440   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:20:06.434250   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:20:06.434349   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:20:06.434444   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:20:06.434563   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:20:06.434623   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:20:06.434717   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:20:06.434805   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:20:06.434894   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:20:06.434974   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:20:06.435052   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:20:06.435135   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:20:06.435204   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:20:06.435288   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:20:06.435365   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:20:06.435421   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:20:06.435474   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:20:06.435531   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:20:06.435689   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:20:06.435781   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:20:06.435827   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:20:06.435886   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:20:06.437538   63216 out.go:235]   - Booting up control plane ...
	I0819 18:20:06.437678   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:20:06.437771   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:20:06.437852   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:20:06.437928   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:20:06.438063   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:20:06.438105   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:20:06.438164   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438342   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438416   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438568   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438637   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438821   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438902   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439167   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439264   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439458   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439472   63216 kubeadm.go:310] 
	I0819 18:20:06.439514   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:20:06.439547   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:20:06.439553   63216 kubeadm.go:310] 
	I0819 18:20:06.439583   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:20:06.439626   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:20:06.439732   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:20:06.439749   63216 kubeadm.go:310] 
	I0819 18:20:06.439873   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:20:06.439915   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:20:06.439944   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:20:06.439952   63216 kubeadm.go:310] 
	I0819 18:20:06.440039   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:20:06.440106   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:20:06.440113   63216 kubeadm.go:310] 
	I0819 18:20:06.440252   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:20:06.440329   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:20:06.440392   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:20:06.440458   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:20:06.440521   63216 kubeadm.go:394] duration metric: took 8m2.012853316s to StartCluster
	I0819 18:20:06.440524   63216 kubeadm.go:310] 
	I0819 18:20:06.440559   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:20:06.440610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:20:06.481255   63216 cri.go:89] found id: ""
	I0819 18:20:06.481285   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.481297   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:20:06.481305   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:20:06.481364   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:20:06.516769   63216 cri.go:89] found id: ""
	I0819 18:20:06.516801   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.516811   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:20:06.516818   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:20:06.516933   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:20:06.551964   63216 cri.go:89] found id: ""
	I0819 18:20:06.551998   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.552006   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:20:06.552014   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:20:06.552108   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:20:06.586084   63216 cri.go:89] found id: ""
	I0819 18:20:06.586115   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.586124   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:20:06.586131   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:20:06.586189   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:20:06.620732   63216 cri.go:89] found id: ""
	I0819 18:20:06.620773   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.620785   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:20:06.620792   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:20:06.620843   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:20:06.659731   63216 cri.go:89] found id: ""
	I0819 18:20:06.659762   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.659772   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:20:06.659780   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:20:06.659846   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:20:06.694223   63216 cri.go:89] found id: ""
	I0819 18:20:06.694257   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.694267   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:20:06.694275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:20:06.694337   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:20:06.727474   63216 cri.go:89] found id: ""
	I0819 18:20:06.727508   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.727518   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:20:06.727528   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:20:06.727538   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:20:06.778006   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:20:06.778041   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:20:06.792059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:20:06.792089   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:20:06.863596   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:20:06.863625   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:20:06.863637   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:20:06.979710   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:20:06.979752   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 18:20:07.030879   63216 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:20:07.030930   63216 out.go:270] * 
	W0819 18:20:07.031004   63216 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.031025   63216 out.go:270] * 
	W0819 18:20:07.031896   63216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:20:07.035220   63216 out.go:201] 
	W0819 18:20:07.036384   63216 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.036435   63216 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:20:07.036466   63216 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:20:07.037783   63216 out.go:201] 
	
	
	==> CRI-O <==
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.027856752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091608027834938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee19d9a4-5a0d-4455-bd46-2ed32d5605d0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.028488504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f37ba98f-8d50-4c67-8458-6305d6035f3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.028551730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f37ba98f-8d50-4c67-8458-6305d6035f3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.028585516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f37ba98f-8d50-4c67-8458-6305d6035f3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.063956212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ec4e0af-2d92-4b64-9680-6090a567296a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.064053358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ec4e0af-2d92-4b64-9680-6090a567296a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.065160085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cc67594-960c-46ef-bde4-3892d6407709 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.065657738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091608065622703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cc67594-960c-46ef-bde4-3892d6407709 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.066206174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8f28641-c153-4ed9-aaee-7df562cbc0e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.066273732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8f28641-c153-4ed9-aaee-7df562cbc0e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.066306725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b8f28641-c153-4ed9-aaee-7df562cbc0e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.098706804Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cae146f6-d5da-4121-88c4-eb3501f3e365 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.098795424Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cae146f6-d5da-4121-88c4-eb3501f3e365 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.099990672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faf8ac62-bc71-4faf-98de-bd6480cc66a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.100375934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091608100354634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faf8ac62-bc71-4faf-98de-bd6480cc66a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.100837716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8c5af28-c2ec-4e4a-b05b-5692cf3f11c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.100898642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8c5af28-c2ec-4e4a-b05b-5692cf3f11c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.100932590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e8c5af28-c2ec-4e4a-b05b-5692cf3f11c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.134850974Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dca5a383-f360-4466-9255-433e656a43ac name=/runtime.v1.RuntimeService/Version
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.134974746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dca5a383-f360-4466-9255-433e656a43ac name=/runtime.v1.RuntimeService/Version
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.136325677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e16309e-6caf-42a9-b37b-3d369ae1da96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.136767534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091608136744375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e16309e-6caf-42a9-b37b-3d369ae1da96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.137229914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ceff5bd5-a37a-434f-a340-2d1fcc1d1cc2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.137300646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ceff5bd5-a37a-434f-a340-2d1fcc1d1cc2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:20:08 old-k8s-version-079123 crio[645]: time="2024-08-19 18:20:08.137349466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ceff5bd5-a37a-434f-a340-2d1fcc1d1cc2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 18:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050661] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037961] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.796045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.906924] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.551301] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.289032] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.062660] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073191] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.227214] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.148485] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.242620] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[Aug19 18:12] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.058214] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.166270] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[ +11.850102] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 18:16] systemd-fstab-generator[5127]: Ignoring "noauto" option for root device
	[Aug19 18:18] systemd-fstab-generator[5400]: Ignoring "noauto" option for root device
	[  +0.061151] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:20:08 up 8 min,  0 users,  load average: 0.27, 0.12, 0.06
	Linux old-k8s-version-079123 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000c8a0e0, 0xc00070fc80, 0x1, 0x0, 0x0)
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0002476c0)
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: goroutine 122 [select]:
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000ce6500, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00027af60, 0x0, 0x0)
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0002476c0)
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5579]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 19 18:20:06 old-k8s-version-079123 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 18:20:06 old-k8s-version-079123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 18:20:06 old-k8s-version-079123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 19 18:20:06 old-k8s-version-079123 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 18:20:06 old-k8s-version-079123 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5638]: I0819 18:20:06.966657    5638 server.go:416] Version: v1.20.0
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5638]: I0819 18:20:06.967118    5638 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5638]: I0819 18:20:06.970303    5638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5638]: W0819 18:20:06.973028    5638 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 19 18:20:06 old-k8s-version-079123 kubelet[5638]: I0819 18:20:06.975061    5638 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (218.597625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-079123" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (703.91s)
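A minimal troubleshooting sketch for the failure above, assembled only from the commands the kubeadm output itself suggests (systemctl, journalctl, the kubelet healthz probe, and the crictl listing). The profile name old-k8s-version-079123 comes from the log; the sketch assumes a shell inside the minikube VM reached via `minikube ssh`, and is an illustration rather than part of the test harness:

	# open a shell inside the failing profile's VM
	out/minikube-linux-amd64 -p old-k8s-version-079123 ssh

	# inside the VM: check why the kubelet keeps restarting (restart counter is at 20 in the log above)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# probe the same healthz endpoint kubeadm's [kubelet-check] polls
	curl -sSL http://localhost:10248/healthz

	# list any control-plane containers CRI-O started (same crictl invocation kubeadm prints)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# the error text above also suggests retrying with:
	#   minikube start --extra-config=kubelet.cgroup-driver=systemd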

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-306581 --alsologtostderr -v=3
E0819 18:15:21.262520   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-306581 --alsologtostderr -v=3: exit status 82 (2m0.507919402s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-306581"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:13:21.211684   65592 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:13:21.211902   65592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:13:21.211917   65592 out.go:358] Setting ErrFile to fd 2...
	I0819 18:13:21.212053   65592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:13:21.212372   65592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:13:21.212614   65592 out.go:352] Setting JSON to false
	I0819 18:13:21.212696   65592 mustload.go:65] Loading cluster: embed-certs-306581
	I0819 18:13:21.213214   65592 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:13:21.213342   65592 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:13:21.213605   65592 mustload.go:65] Loading cluster: embed-certs-306581
	I0819 18:13:21.213717   65592 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:13:21.213764   65592 stop.go:39] StopHost: embed-certs-306581
	I0819 18:13:21.214170   65592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:13:21.214211   65592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:13:21.228717   65592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45325
	I0819 18:13:21.229300   65592 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:13:21.229971   65592 main.go:141] libmachine: Using API Version  1
	I0819 18:13:21.229995   65592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:13:21.230349   65592 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:13:21.232646   65592 out.go:177] * Stopping node "embed-certs-306581"  ...
	I0819 18:13:21.234081   65592 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0819 18:13:21.234121   65592 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:13:21.234337   65592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0819 18:13:21.234369   65592 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:13:21.237192   65592 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:13:21.237572   65592 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:12:30 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:13:21.237625   65592 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:13:21.237696   65592 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:13:21.237847   65592 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:13:21.238070   65592 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:13:21.238234   65592 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:13:21.339582   65592 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0819 18:13:21.404025   65592 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0819 18:13:21.464366   65592 main.go:141] libmachine: Stopping "embed-certs-306581"...
	I0819 18:13:21.464403   65592 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:13:21.466046   65592 main.go:141] libmachine: (embed-certs-306581) Calling .Stop
	I0819 18:13:21.469613   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 0/120
	I0819 18:13:22.470887   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 1/120
	I0819 18:13:23.472519   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 2/120
	I0819 18:13:24.474477   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 3/120
	I0819 18:13:25.475855   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 4/120
	I0819 18:13:26.477365   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 5/120
	I0819 18:13:27.479708   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 6/120
	I0819 18:13:28.480969   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 7/120
	I0819 18:13:29.482341   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 8/120
	I0819 18:13:30.483647   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 9/120
	I0819 18:13:31.485063   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 10/120
	I0819 18:13:32.487195   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 11/120
	I0819 18:13:33.488650   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 12/120
	I0819 18:13:34.489967   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 13/120
	I0819 18:13:35.491465   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 14/120
	I0819 18:13:36.493661   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 15/120
	I0819 18:13:37.495166   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 16/120
	I0819 18:13:38.496548   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 17/120
	I0819 18:13:39.497719   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 18/120
	I0819 18:13:40.499553   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 19/120
	I0819 18:13:41.502012   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 20/120
	I0819 18:13:42.503295   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 21/120
	I0819 18:13:43.504459   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 22/120
	I0819 18:13:44.505711   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 23/120
	I0819 18:13:45.507727   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 24/120
	I0819 18:13:46.509690   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 25/120
	I0819 18:13:47.511018   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 26/120
	I0819 18:13:48.512260   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 27/120
	I0819 18:13:49.513712   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 28/120
	I0819 18:13:50.515356   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 29/120
	I0819 18:13:51.517256   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 30/120
	I0819 18:13:52.519256   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 31/120
	I0819 18:13:53.520633   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 32/120
	I0819 18:13:54.522899   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 33/120
	I0819 18:13:55.524363   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 34/120
	I0819 18:13:56.526264   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 35/120
	I0819 18:13:57.527697   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 36/120
	I0819 18:13:58.529326   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 37/120
	I0819 18:13:59.530927   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 38/120
	I0819 18:14:00.532570   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 39/120
	I0819 18:14:01.534749   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 40/120
	I0819 18:14:02.536070   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 41/120
	I0819 18:14:03.537692   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 42/120
	I0819 18:14:04.539407   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 43/120
	I0819 18:14:05.540884   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 44/120
	I0819 18:14:06.542994   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 45/120
	I0819 18:14:07.544336   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 46/120
	I0819 18:14:08.546164   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 47/120
	I0819 18:14:09.547530   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 48/120
	I0819 18:14:10.549027   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 49/120
	I0819 18:14:11.551191   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 50/120
	I0819 18:14:12.552944   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 51/120
	I0819 18:14:13.555138   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 52/120
	I0819 18:14:14.556554   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 53/120
	I0819 18:14:15.558032   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 54/120
	I0819 18:14:16.560143   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 55/120
	I0819 18:14:17.561585   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 56/120
	I0819 18:14:18.563101   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 57/120
	I0819 18:14:19.565069   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 58/120
	I0819 18:14:20.567388   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 59/120
	I0819 18:14:21.569058   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 60/120
	I0819 18:14:22.571270   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 61/120
	I0819 18:14:23.572913   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 62/120
	I0819 18:14:24.575084   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 63/120
	I0819 18:14:25.576317   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 64/120
	I0819 18:14:26.577838   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 65/120
	I0819 18:14:27.579049   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 66/120
	I0819 18:14:28.580287   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 67/120
	I0819 18:14:29.581625   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 68/120
	I0819 18:14:30.583283   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 69/120
	I0819 18:14:31.585640   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 70/120
	I0819 18:14:32.587483   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 71/120
	I0819 18:14:33.588843   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 72/120
	I0819 18:14:34.590082   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 73/120
	I0819 18:14:35.591416   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 74/120
	I0819 18:14:36.593516   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 75/120
	I0819 18:14:37.595149   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 76/120
	I0819 18:14:38.596927   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 77/120
	I0819 18:14:39.599333   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 78/120
	I0819 18:14:40.600836   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 79/120
	I0819 18:14:41.603046   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 80/120
	I0819 18:14:42.604505   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 81/120
	I0819 18:14:43.605920   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 82/120
	I0819 18:14:44.607615   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 83/120
	I0819 18:14:45.609576   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 84/120
	I0819 18:14:46.611619   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 85/120
	I0819 18:14:47.613050   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 86/120
	I0819 18:14:48.614568   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 87/120
	I0819 18:14:49.615865   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 88/120
	I0819 18:14:50.617417   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 89/120
	I0819 18:14:51.619704   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 90/120
	I0819 18:14:52.622103   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 91/120
	I0819 18:14:53.623654   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 92/120
	I0819 18:14:54.625244   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 93/120
	I0819 18:14:55.626666   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 94/120
	I0819 18:14:56.628620   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 95/120
	I0819 18:14:57.630429   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 96/120
	I0819 18:14:58.631923   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 97/120
	I0819 18:14:59.633380   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 98/120
	I0819 18:15:00.635415   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 99/120
	I0819 18:15:01.637286   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 100/120
	I0819 18:15:02.639786   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 101/120
	I0819 18:15:03.641063   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 102/120
	I0819 18:15:04.643293   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 103/120
	I0819 18:15:05.644768   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 104/120
	I0819 18:15:06.646204   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 105/120
	I0819 18:15:07.647736   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 106/120
	I0819 18:15:08.648931   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 107/120
	I0819 18:15:09.651194   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 108/120
	I0819 18:15:10.652405   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 109/120
	I0819 18:15:11.654550   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 110/120
	I0819 18:15:12.655940   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 111/120
	I0819 18:15:13.657335   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 112/120
	I0819 18:15:14.659326   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 113/120
	I0819 18:15:15.660688   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 114/120
	I0819 18:15:16.662782   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 115/120
	I0819 18:15:17.664297   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 116/120
	I0819 18:15:18.665597   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 117/120
	I0819 18:15:19.667318   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 118/120
	I0819 18:15:20.669674   65592 main.go:141] libmachine: (embed-certs-306581) Waiting for machine to stop 119/120
	I0819 18:15:21.670158   65592 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0819 18:15:21.670216   65592 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0819 18:15:21.672122   65592 out.go:201] 
	W0819 18:15:21.673336   65592 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0819 18:15:21.673358   65592 out.go:270] * 
	* 
	W0819 18:15:21.676355   65592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:15:21.677537   65592 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-306581 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581: exit status 3 (18.657436705s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:15:40.337015   66008 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.181:22: connect: no route to host
	E0819 18:15:40.337033   66008 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.181:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-306581" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.17s)
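For reference, the stop failure above is the graceful-stop path timing out: libmachine polls the kvm2 driver roughly once per second for 120 attempts ("Waiting for machine to stop N/120"), the domain still reports "Running", and minikube exits with GUEST_STOP_TIMEOUT (exit status 82). Below is a rough shell sketch of re-checking the state and of that same poll-until-timeout pattern; the profile name and log path are taken from the output above, and it assumes the status Host field reads "Stopped" once the VM is actually down:

	# re-check the machine state the stop loop was polling
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581

	# the error box above asks for this file to be attached when filing an issue
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log

	# the same wait-for-stop pattern seen in the log: poll once per second, give up after 120 tries
	for i in $(seq 1 120); do
	    state=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-306581 2>/dev/null)
	    [ "$state" = "Stopped" ] && break
	    sleep 1
	done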

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581: exit status 3 (3.164076648s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:15:43.501059   66102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.181:22: connect: no route to host
	E0819 18:15:43.501081   66102 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.181:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-306581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-306581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153705606s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.181:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-306581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581: exit status 3 (3.062004509s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:15:52.717058   66183 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.181:22: connect: no route to host
	E0819 18:15:52.717078   66183 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.181:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-306581" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
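The addon failure above is a connectivity problem rather than an addon problem: each step first opens an SSH session to the node at 192.168.72.181 and gets "no route to host", because the VM was left unreachable by the failed stop. A small sketch of verifying reachability before retrying; the IP, profile name, and addons command are taken from the log, while the ping/ssh probe is only an illustration:

	# quick reachability probe of the node IP reported in the errors above
	ping -c 1 -W 2 192.168.72.181
	out/minikube-linux-amd64 -p embed-certs-306581 ssh "echo ssh-ok"

	# retry the same addons command only once the host answers again
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-306581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4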

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 18:25:17.569854574 +0000 UTC m=+5579.274022455
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-813424 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-813424 logs -n 25: (1.173850372s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-975771                              | cert-expiration-975771       | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-233969                  | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-233969                                   | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233045             | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079123        | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233045                  | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-813424       | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:16 UTC |
	|         | default-k8s-diff-port-813424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079123             | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-233045 image list                           | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-814719 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | disable-driver-mounts-814719                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306581            | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC | 19 Aug 24 18:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306581                 | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:15:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:15:52.756356   66229 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:15:52.756664   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756675   66229 out.go:358] Setting ErrFile to fd 2...
	I0819 18:15:52.756680   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756881   66229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:15:52.757409   66229 out.go:352] Setting JSON to false
	I0819 18:15:52.758366   66229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7098,"bootTime":1724084255,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:15:52.758430   66229 start.go:139] virtualization: kvm guest
	I0819 18:15:52.760977   66229 out.go:177] * [embed-certs-306581] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:15:52.762479   66229 notify.go:220] Checking for updates...
	I0819 18:15:52.762504   66229 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:15:52.763952   66229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:15:52.765453   66229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:15:52.766810   66229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:15:52.768135   66229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:15:52.769369   66229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:15:52.771017   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:52.771443   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.771504   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.786463   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0819 18:15:52.786925   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.787501   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.787523   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.787800   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.787975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.788239   66229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:15:52.788527   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.788562   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.803703   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0819 18:15:52.804145   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.804609   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.804625   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.804962   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.805142   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.842707   66229 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:15:52.844070   66229 start.go:297] selected driver: kvm2
	I0819 18:15:52.844092   66229 start.go:901] validating driver "kvm2" against &{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.844258   66229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:15:52.844998   66229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.845085   66229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:15:52.860606   66229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:15:52.861678   66229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:15:52.861730   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:15:52.861742   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:15:52.861793   66229 start.go:340] cluster config:
	{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.862003   66229 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.864173   66229 out.go:177] * Starting "embed-certs-306581" primary control-plane node in "embed-certs-306581" cluster
	I0819 18:15:52.865772   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:15:52.865819   66229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:15:52.865827   66229 cache.go:56] Caching tarball of preloaded images
	I0819 18:15:52.865902   66229 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:15:52.865913   66229 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:15:52.866012   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:15:52.866250   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:15:52.866299   66229 start.go:364] duration metric: took 26.7µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:15:52.866311   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:15:52.866316   66229 fix.go:54] fixHost starting: 
	I0819 18:15:52.866636   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.866671   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.883154   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0819 18:15:52.883648   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.884149   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.884170   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.884509   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.884710   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.884888   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:15:52.886632   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Running err=<nil>
	W0819 18:15:52.886653   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:15:52.888856   66229 out.go:177] * Updating the running kvm2 "embed-certs-306581" VM ...
	I0819 18:15:50.375775   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.376597   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:50.455083   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:50.467702   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:50.467768   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:50.517276   63216 cri.go:89] found id: ""
	I0819 18:15:50.517306   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.517315   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:50.517323   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:50.517399   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:50.550878   63216 cri.go:89] found id: ""
	I0819 18:15:50.550905   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.550914   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:50.550921   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:50.550984   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:50.583515   63216 cri.go:89] found id: ""
	I0819 18:15:50.583543   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.583553   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:50.583560   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:50.583622   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:50.618265   63216 cri.go:89] found id: ""
	I0819 18:15:50.618291   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.618299   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:50.618304   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:50.618362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:50.653436   63216 cri.go:89] found id: ""
	I0819 18:15:50.653461   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.653469   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:50.653476   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:50.653534   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:50.687715   63216 cri.go:89] found id: ""
	I0819 18:15:50.687745   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.687757   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:50.687764   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:50.687885   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:50.721235   63216 cri.go:89] found id: ""
	I0819 18:15:50.721262   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.721272   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:50.721280   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:50.721328   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:50.754095   63216 cri.go:89] found id: ""
	I0819 18:15:50.754126   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.754134   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:50.754143   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:50.754156   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:50.805661   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:50.805698   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:50.819495   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:50.819536   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:50.887296   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:50.887317   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:50.887334   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:50.966224   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:50.966261   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.508007   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:53.520812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:53.520870   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:53.552790   63216 cri.go:89] found id: ""
	I0819 18:15:53.552816   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.552823   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:53.552829   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:53.552873   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:53.585937   63216 cri.go:89] found id: ""
	I0819 18:15:53.585969   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.585978   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:53.585986   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:53.586057   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:53.618890   63216 cri.go:89] found id: ""
	I0819 18:15:53.618915   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.618922   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:53.618928   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:53.618975   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:53.650045   63216 cri.go:89] found id: ""
	I0819 18:15:53.650069   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.650076   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:53.650082   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:53.650138   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:53.685069   63216 cri.go:89] found id: ""
	I0819 18:15:53.685097   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.685106   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:53.685113   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:53.685179   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:53.717742   63216 cri.go:89] found id: ""
	I0819 18:15:53.717771   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.717778   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:53.717784   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:53.717832   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:53.747768   63216 cri.go:89] found id: ""
	I0819 18:15:53.747798   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.747806   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:53.747812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:53.747858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:53.779973   63216 cri.go:89] found id: ""
	I0819 18:15:53.779999   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.780006   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:53.780016   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:53.780027   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.815619   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:53.815656   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:53.866767   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:53.866802   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:53.879693   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:53.879721   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:53.947610   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:53.947640   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:53.947659   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:52.172237   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:54.172434   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.890101   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:15:52.890131   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.890374   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:15:52.892900   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893405   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:12:30 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:15:52.893431   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893613   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:15:52.893796   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.893979   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.894149   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:15:52.894328   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:52.894580   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:15:52.894597   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:15:55.789130   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:54.376799   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.884787   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.524639   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:56.537312   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:56.537395   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:56.569913   63216 cri.go:89] found id: ""
	I0819 18:15:56.569958   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.569965   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:56.569972   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:56.570031   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:56.602119   63216 cri.go:89] found id: ""
	I0819 18:15:56.602145   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.602152   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:56.602158   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:56.602211   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:56.634864   63216 cri.go:89] found id: ""
	I0819 18:15:56.634900   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.634910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:56.634920   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:56.634982   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:56.667099   63216 cri.go:89] found id: ""
	I0819 18:15:56.667127   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.667136   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:56.667145   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:56.667194   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:56.703539   63216 cri.go:89] found id: ""
	I0819 18:15:56.703562   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.703571   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:56.703576   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:56.703637   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.734668   63216 cri.go:89] found id: ""
	I0819 18:15:56.734691   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.734698   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:56.734703   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:56.734747   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:56.768840   63216 cri.go:89] found id: ""
	I0819 18:15:56.768866   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.768874   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:56.768880   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:56.768925   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:56.800337   63216 cri.go:89] found id: ""
	I0819 18:15:56.800366   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.800375   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:56.800384   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:56.800398   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:56.866036   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:56.866060   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:56.866072   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:56.955372   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:56.955414   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:57.004450   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:57.004477   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:57.057284   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:57.057320   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.570450   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:59.583640   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:59.583729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:59.617911   63216 cri.go:89] found id: ""
	I0819 18:15:59.617943   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.617954   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:59.617963   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:59.618014   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:59.650239   63216 cri.go:89] found id: ""
	I0819 18:15:59.650265   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.650274   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:59.650279   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:59.650329   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:59.684877   63216 cri.go:89] found id: ""
	I0819 18:15:59.684902   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.684910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:59.684916   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:59.684977   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:59.717378   63216 cri.go:89] found id: ""
	I0819 18:15:59.717402   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.717414   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:59.717428   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:59.717484   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:59.748937   63216 cri.go:89] found id: ""
	I0819 18:15:59.748968   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.748980   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:59.748989   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:59.749058   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.672222   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.171375   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:58.861002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:59.375951   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:01.376193   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:03.376512   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.781784   63216 cri.go:89] found id: ""
	I0819 18:15:59.781819   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.781830   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:59.781837   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:59.781899   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:59.815593   63216 cri.go:89] found id: ""
	I0819 18:15:59.815626   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.815637   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:59.815645   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:59.815709   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:59.847540   63216 cri.go:89] found id: ""
	I0819 18:15:59.847571   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.847581   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:59.847595   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:59.847609   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.860256   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:59.860292   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:59.931873   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:59.931900   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:59.931915   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:00.011897   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:00.011938   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:00.047600   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:00.047628   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.599457   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:02.617040   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:02.617112   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:02.658148   63216 cri.go:89] found id: ""
	I0819 18:16:02.658173   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.658181   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:02.658187   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:02.658256   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:02.711833   63216 cri.go:89] found id: ""
	I0819 18:16:02.711873   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.711882   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:02.711889   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:02.711945   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:02.746611   63216 cri.go:89] found id: ""
	I0819 18:16:02.746644   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.746652   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:02.746658   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:02.746712   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:02.781731   63216 cri.go:89] found id: ""
	I0819 18:16:02.781757   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.781764   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:02.781771   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:02.781827   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:02.814215   63216 cri.go:89] found id: ""
	I0819 18:16:02.814242   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.814253   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:02.814260   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:02.814320   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:02.848767   63216 cri.go:89] found id: ""
	I0819 18:16:02.848804   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.848815   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:02.848823   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:02.848881   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:02.882890   63216 cri.go:89] found id: ""
	I0819 18:16:02.882913   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.882920   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:02.882927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:02.882983   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:02.918333   63216 cri.go:89] found id: ""
	I0819 18:16:02.918362   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.918370   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:02.918393   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:02.918405   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.966994   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:02.967024   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:02.980377   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:02.980437   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:03.045097   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:03.045127   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:03.045145   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:03.126682   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:03.126727   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:01.671492   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.171471   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.941029   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:05.376677   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:05.376705   62749 pod_ready.go:82] duration metric: took 4m0.006404877s for pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:05.376714   62749 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 18:16:05.376720   62749 pod_ready.go:39] duration metric: took 4m6.335802515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:05.376735   62749 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:16:05.376775   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.376822   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.419678   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:05.419719   62749 cri.go:89] found id: ""
	I0819 18:16:05.419728   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:05.419801   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.424210   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.424271   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.459501   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:05.459527   62749 cri.go:89] found id: ""
	I0819 18:16:05.459535   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:05.459578   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.463654   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.463711   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.497591   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:05.497613   62749 cri.go:89] found id: ""
	I0819 18:16:05.497620   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:05.497667   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.501207   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.501274   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.535112   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:05.535141   62749 cri.go:89] found id: ""
	I0819 18:16:05.535150   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:05.535215   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.538855   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.538909   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.573744   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:05.573769   62749 cri.go:89] found id: ""
	I0819 18:16:05.573776   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:05.573824   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.577981   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.578045   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.616545   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:05.616569   62749 cri.go:89] found id: ""
	I0819 18:16:05.616577   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:05.616630   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.620549   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.620597   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.662743   62749 cri.go:89] found id: ""
	I0819 18:16:05.662781   62749 logs.go:276] 0 containers: []
	W0819 18:16:05.662792   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.662800   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:05.662855   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:05.711433   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.711456   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:05.711463   62749 cri.go:89] found id: ""
	I0819 18:16:05.711472   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:05.711536   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.716476   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.720240   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:05.720261   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.261474   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:06.261523   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:06.384895   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:06.384927   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:06.421665   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:06.421700   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:06.461866   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:06.461900   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:06.496543   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:06.496570   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:06.551478   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:06.551518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:06.586858   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.586886   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.625272   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.625300   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:06.697922   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:06.697960   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:06.711624   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:06.711658   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:06.752648   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:06.752677   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:06.796805   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:06.796836   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
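
The block above shows minikube's log-gathering cycle: it asks crictl for every container ID matching a name filter (cri.go), then tails each container's logs (logs.go). A minimal sketch of that same two-step pattern, not minikube's own code; it assumes crictl is on the node and reachable with sudo, and the binary path and 400-line tail mirror the commands quoted in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors "sudo crictl ps -a --quiet --name=<name>" from the log
// and returns the container IDs crictl reports for that name filter.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors "sudo /usr/bin/crictl logs --tail 400 <id>".
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("listing %s: %v\n", name, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("--- %s ---\n%s\n", id, logs)
		}
	}
}

An empty ID list corresponds to the "No container was found matching ..." warnings that appear for kindnet above.
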
	I0819 18:16:05.662843   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:05.680724   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.680811   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.719205   63216 cri.go:89] found id: ""
	I0819 18:16:05.719227   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.719234   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:05.719240   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.719283   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.764548   63216 cri.go:89] found id: ""
	I0819 18:16:05.764577   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.764587   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:05.764593   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.764644   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.800478   63216 cri.go:89] found id: ""
	I0819 18:16:05.800503   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.800521   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:05.800527   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.800582   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.837403   63216 cri.go:89] found id: ""
	I0819 18:16:05.837432   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.837443   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:05.837450   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.837506   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.869330   63216 cri.go:89] found id: ""
	I0819 18:16:05.869357   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.869367   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:05.869375   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.869463   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.900354   63216 cri.go:89] found id: ""
	I0819 18:16:05.900382   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.900393   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:05.900401   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.900457   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.933899   63216 cri.go:89] found id: ""
	I0819 18:16:05.933926   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.933937   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.933944   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:05.934003   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:05.968393   63216 cri.go:89] found id: ""
	I0819 18:16:05.968421   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.968430   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:05.968441   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:05.968458   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:05.980957   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:05.980988   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:06.045310   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:06.045359   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:06.045375   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.124351   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.124389   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.168102   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.168130   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:08.718499   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:08.731535   63216 kubeadm.go:597] duration metric: took 4m4.252819836s to restartPrimaryControlPlane
	W0819 18:16:08.731622   63216 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:08.731651   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:06.172578   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.671110   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.013019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:09.338729   62749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:09.355014   62749 api_server.go:72] duration metric: took 4m18.036977131s to wait for apiserver process to appear ...
	I0819 18:16:09.355046   62749 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:16:09.355086   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:09.355148   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:09.390088   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:09.390107   62749 cri.go:89] found id: ""
	I0819 18:16:09.390115   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:09.390161   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.393972   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:09.394024   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:09.426919   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:09.426943   62749 cri.go:89] found id: ""
	I0819 18:16:09.426953   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:09.427007   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.430685   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:09.430755   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:09.465843   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:09.465867   62749 cri.go:89] found id: ""
	I0819 18:16:09.465876   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:09.465936   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.469990   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:09.470057   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:09.503690   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:09.503716   62749 cri.go:89] found id: ""
	I0819 18:16:09.503727   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:09.503789   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.507731   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:09.507791   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:09.541067   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:09.541098   62749 cri.go:89] found id: ""
	I0819 18:16:09.541108   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:09.541169   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.546503   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:09.546568   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:09.587861   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:09.587888   62749 cri.go:89] found id: ""
	I0819 18:16:09.587898   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:09.587960   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.593765   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:09.593831   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:09.628426   62749 cri.go:89] found id: ""
	I0819 18:16:09.628456   62749 logs.go:276] 0 containers: []
	W0819 18:16:09.628464   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:09.628470   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:09.628529   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:09.666596   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.666622   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.666628   62749 cri.go:89] found id: ""
	I0819 18:16:09.666636   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:09.666688   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.670929   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.674840   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:09.674863   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.708286   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:09.708313   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.739212   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:09.739234   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:10.171487   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:10.171535   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:10.208985   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:10.209025   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:10.222001   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:10.222028   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:10.267193   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:10.267225   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:10.300082   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:10.300110   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:10.333403   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:10.333434   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:10.371961   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:10.371989   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:10.425550   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:10.425586   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:10.500742   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:10.500796   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:10.602484   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:10.602518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.149769   62749 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8444/healthz ...
	I0819 18:16:13.154238   62749 api_server.go:279] https://192.168.61.243:8444/healthz returned 200:
	ok
	I0819 18:16:13.155139   62749 api_server.go:141] control plane version: v1.31.0
	I0819 18:16:13.155154   62749 api_server.go:131] duration metric: took 3.800101993s to wait for apiserver health ...
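
The lines above show the apiserver healthz probe at https://192.168.61.243:8444/healthz returning 200 before the control-plane version is read. A minimal Go sketch of such a poll, assuming for brevity that TLS verification is skipped; minikube's own api_server.go check is more involved and the timeout below is illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 or the
// deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption of this sketch only: skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.243:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
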
	I0819 18:16:13.155161   62749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:16:13.155180   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:13.155232   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:13.194723   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.194749   62749 cri.go:89] found id: ""
	I0819 18:16:13.194759   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:13.194811   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.198645   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:13.198703   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:13.236332   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.236405   62749 cri.go:89] found id: ""
	I0819 18:16:13.236418   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:13.236473   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.240682   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:13.240764   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:13.277257   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:13.277283   62749 cri.go:89] found id: ""
	I0819 18:16:13.277290   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:13.277339   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.281458   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:13.281516   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:13.319419   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.319444   62749 cri.go:89] found id: ""
	I0819 18:16:13.319453   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:13.319508   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.323377   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:13.323444   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:13.357320   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.357344   62749 cri.go:89] found id: ""
	I0819 18:16:13.357353   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:13.357417   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.361505   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:13.361582   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:13.396379   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.396396   62749 cri.go:89] found id: ""
	I0819 18:16:13.396403   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:13.396457   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.400372   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:13.400442   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:13.433520   62749 cri.go:89] found id: ""
	I0819 18:16:13.433551   62749 logs.go:276] 0 containers: []
	W0819 18:16:13.433561   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:13.433569   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:13.433629   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:13.467382   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.467411   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.467418   62749 cri.go:89] found id: ""
	I0819 18:16:13.467427   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:13.467486   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.471371   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.474905   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:13.474924   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:13.547564   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:13.547596   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.593702   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:13.593731   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.629610   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:13.629634   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.669337   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:13.669372   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.729986   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:13.730012   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.766424   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:13.766459   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.806677   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:13.806702   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:13.540438   63216 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.808760826s)
	I0819 18:16:13.540508   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:13.555141   63216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:16:13.565159   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:16:13.575671   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:16:13.575689   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:16:13.575743   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:16:13.586181   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:16:13.586388   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:16:13.597239   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:16:13.606788   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:16:13.606857   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:16:13.616964   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.627128   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:16:13.627195   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.637263   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:16:13.646834   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:16:13.646898   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:16:13.657566   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:16:13.887585   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
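
The cleanup above (kubeadm.go:157-163) greps each kubeconfig kubeadm may have left behind for the control-plane endpoint and removes the ones that do not match, so "kubeadm init" can regenerate them; in this run the files simply do not exist, hence the status 2 exits. A minimal local sketch of that logic, assuming it runs directly on the node rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil {
			// Missing file: nothing to clean up, the "No such file or directory" case above.
			fmt.Printf("%s: %v (skipping)\n", path, err)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			fmt.Printf("%s already points at %s, keeping it\n", path, endpoint)
			continue
		}
		// Stale config from a previous cluster: remove it, like "sudo rm -f <conf>".
		if err := os.Remove(path); err != nil {
			fmt.Printf("removing %s: %v\n", path, err)
		} else {
			fmt.Printf("removed stale %s\n", path)
		}
	}
}
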
	I0819 18:16:11.171886   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:13.672521   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:14.199046   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:14.199103   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:14.213508   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:14.213537   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:14.341980   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:14.342017   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:14.389817   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:14.389853   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:14.425890   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:14.425928   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:16.991182   62749 system_pods.go:59] 8 kube-system pods found
	I0819 18:16:16.991211   62749 system_pods.go:61] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.991217   62749 system_pods.go:61] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.991221   62749 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.991225   62749 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.991229   62749 system_pods.go:61] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.991232   62749 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.991239   62749 system_pods.go:61] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.991243   62749 system_pods.go:61] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.991250   62749 system_pods.go:74] duration metric: took 3.836084784s to wait for pod list to return data ...
	I0819 18:16:16.991257   62749 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:16:16.993181   62749 default_sa.go:45] found service account: "default"
	I0819 18:16:16.993201   62749 default_sa.go:55] duration metric: took 1.93729ms for default service account to be created ...
	I0819 18:16:16.993208   62749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:16:16.997803   62749 system_pods.go:86] 8 kube-system pods found
	I0819 18:16:16.997825   62749 system_pods.go:89] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.997830   62749 system_pods.go:89] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.997835   62749 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.997840   62749 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.997844   62749 system_pods.go:89] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.997848   62749 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.997854   62749 system_pods.go:89] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.997861   62749 system_pods.go:89] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.997868   62749 system_pods.go:126] duration metric: took 4.655661ms to wait for k8s-apps to be running ...
	I0819 18:16:16.997877   62749 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:16:16.997917   62749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:17.013524   62749 system_svc.go:56] duration metric: took 15.634104ms WaitForService to wait for kubelet
	I0819 18:16:17.013559   62749 kubeadm.go:582] duration metric: took 4m25.695525816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:16:17.013585   62749 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:16:17.016278   62749 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:16:17.016301   62749 node_conditions.go:123] node cpu capacity is 2
	I0819 18:16:17.016315   62749 node_conditions.go:105] duration metric: took 2.723578ms to run NodePressure ...
	I0819 18:16:17.016326   62749 start.go:241] waiting for startup goroutines ...
	I0819 18:16:17.016336   62749 start.go:246] waiting for cluster config update ...
	I0819 18:16:17.016351   62749 start.go:255] writing updated cluster config ...
	I0819 18:16:17.016817   62749 ssh_runner.go:195] Run: rm -f paused
	I0819 18:16:17.063056   62749 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:16:17.065819   62749 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-813424" cluster and "default" namespace by default
	I0819 18:16:14.093007   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:17.164989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:16.172074   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:18.670402   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:20.671024   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:22.671462   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:26.288975   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:25.175354   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:27.671452   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.671496   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.357082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:31.671726   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:33.672458   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:35.437060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:36.171920   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.172318   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.513064   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:40.670687   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:42.670858   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.671276   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.589000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.660996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.171302   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:49.171707   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:51.675414   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:53.665939   62137 pod_ready.go:82] duration metric: took 4m0.001066956s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:53.665969   62137 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:16:53.665994   62137 pod_ready.go:39] duration metric: took 4m12.464901403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:53.666051   62137 kubeadm.go:597] duration metric: took 4m20.502224967s to restartPrimaryControlPlane
	W0819 18:16:53.666114   62137 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:53.666143   62137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
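
The repeated pod_ready.go lines above track a metrics-server pod that never reports Ready inside the 4m0s budget, after which minikube gives up and resets the cluster. A minimal client-go sketch of such a readiness wait, not minikube's pod_ready.go; the kubeconfig path, pod name, and polling interval are taken from the log or assumed for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a locally readable kubeconfig; minikube builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "metrics-server-6867b74b74-jkvcs"
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("WaitExtra: timed out waiting for pod to be Ready")
}
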
	I0819 18:16:53.740978   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:56.817027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:02.892936   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:05.965053   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:12.048961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:15.116969   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:19.922253   62137 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.256081543s)
	I0819 18:17:19.922334   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:19.937012   62137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:17:19.946269   62137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:17:19.955344   62137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:17:19.955363   62137 kubeadm.go:157] found existing configuration files:
	
	I0819 18:17:19.955405   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:17:19.963979   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:17:19.964039   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:17:19.972679   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:17:19.980890   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:17:19.980947   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:17:19.989705   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:17:19.998606   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:17:19.998664   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:17:20.007553   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:17:20.016136   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:17:20.016185   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:17:20.024827   62137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:17:20.073205   62137 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:17:20.073284   62137 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:17:20.186906   62137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:17:20.187034   62137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:17:20.187125   62137 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:17:20.198750   62137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:17:20.200704   62137 out.go:235]   - Generating certificates and keys ...
	I0819 18:17:20.200810   62137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:17:20.200905   62137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:17:20.201015   62137 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:17:20.201099   62137 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:17:20.201202   62137 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:17:20.201279   62137 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:17:20.201370   62137 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:17:20.201468   62137 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:17:20.201578   62137 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:17:20.201686   62137 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:17:20.201743   62137 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:17:20.201823   62137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:17:20.386866   62137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:17:20.483991   62137 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:17:20.575440   62137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:17:20.704349   62137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:17:20.834890   62137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:17:20.835583   62137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:17:20.839290   62137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:17:21.197002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:20.841232   62137 out.go:235]   - Booting up control plane ...
	I0819 18:17:20.841313   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:17:20.841374   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:17:20.841428   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:17:20.858185   62137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:17:20.866369   62137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:17:20.866447   62137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:17:20.997302   62137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:17:20.997435   62137 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:17:21.499506   62137 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041994ms
	I0819 18:17:21.499625   62137 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:17:26.501489   62137 kubeadm.go:310] [api-check] The API server is healthy after 5.002014094s
	I0819 18:17:26.514398   62137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:17:26.534278   62137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:17:26.557460   62137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:17:26.557706   62137 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-233969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:17:26.569142   62137 kubeadm.go:310] [bootstrap-token] Using token: 2skh80.c6u95wnw3x4gmagv
	I0819 18:17:24.273082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:26.570814   62137 out.go:235]   - Configuring RBAC rules ...
	I0819 18:17:26.570940   62137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:17:26.583073   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:17:26.592407   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:17:26.595488   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:17:26.599062   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:17:26.603754   62137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:17:26.908245   62137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:17:27.340277   62137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:17:27.909394   62137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:17:27.912696   62137 kubeadm.go:310] 
	I0819 18:17:27.912811   62137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:17:27.912834   62137 kubeadm.go:310] 
	I0819 18:17:27.912953   62137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:17:27.912965   62137 kubeadm.go:310] 
	I0819 18:17:27.912996   62137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:17:27.913086   62137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:17:27.913166   62137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:17:27.913178   62137 kubeadm.go:310] 
	I0819 18:17:27.913246   62137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:17:27.913266   62137 kubeadm.go:310] 
	I0819 18:17:27.913338   62137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:17:27.913349   62137 kubeadm.go:310] 
	I0819 18:17:27.913422   62137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:17:27.913527   62137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:17:27.913613   62137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:17:27.913622   62137 kubeadm.go:310] 
	I0819 18:17:27.913727   62137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:17:27.913827   62137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:17:27.913842   62137 kubeadm.go:310] 
	I0819 18:17:27.913934   62137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914073   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:17:27.914112   62137 kubeadm.go:310] 	--control-plane 
	I0819 18:17:27.914121   62137 kubeadm.go:310] 
	I0819 18:17:27.914223   62137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:17:27.914235   62137 kubeadm.go:310] 
	I0819 18:17:27.914353   62137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914499   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:17:27.916002   62137 kubeadm.go:310] W0819 18:17:20.045306    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916280   62137 kubeadm.go:310] W0819 18:17:20.046268    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916390   62137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
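The two W-level lines above are kubeadm flagging the v1beta3 config minikube generated. A minimal sketch of the migration kubeadm suggests, assuming the kubeadm binary sits in the same versioned binaries directory as the kubectl calls in this log, and using the config path that appears later in this log for another profile (both paths are assumptions; minikube regenerates this file itself):

# Copy the generated config out, then let kubeadm rewrite it against the current API version.
sudo cp /var/tmp/minikube/kubeadm.yaml /tmp/kubeadm-v1beta3.yaml
sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
  --old-config /tmp/kubeadm-v1beta3.yaml \
  --new-config /tmp/kubeadm-migrated.yaml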
	I0819 18:17:27.916417   62137 cni.go:84] Creating CNI manager for ""
	I0819 18:17:27.916426   62137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:17:27.918384   62137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:17:27.919646   62137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:17:27.930298   62137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
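For reference, a bridge CNI conflist generally has the shape sketched below. This is an approximation written to a scratch path, not the exact 496-byte file minikube copies to /etc/cni/net.d/1-k8s.conflist; the subnet and plugin options are assumptions:

# Approximate shape of a bridge + portmap CNI plugin chain (CNI spec 0.4.0).
cat > /tmp/1-k8s.conflist.example <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF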
	I0819 18:17:27.946332   62137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:17:27.946440   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:27.946462   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-233969 minikube.k8s.io/updated_at=2024_08_19T18_17_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=no-preload-233969 minikube.k8s.io/primary=true
	I0819 18:17:27.972836   62137 ops.go:34] apiserver oom_adj: -16
	I0819 18:17:28.134899   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:28.635909   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.135326   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.635339   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.135992   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.635626   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.135493   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.635632   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.135812   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.208229   62137 kubeadm.go:1113] duration metric: took 4.261865811s to wait for elevateKubeSystemPrivileges
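A quick way to confirm what the two kubectl invocations above produced, using the same binary and kubeconfig paths that appear in the log (object names are taken from the commands themselves):

# The cluster-admin binding created for kube-system:default.
sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get clusterrolebinding minikube-rbac -o wide
# The labels written onto the node, including minikube.k8s.io/primary=true.
sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get node no-preload-233969 --show-labels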
	I0819 18:17:32.208254   62137 kubeadm.go:394] duration metric: took 4m59.094587246s to StartCluster
	I0819 18:17:32.208270   62137 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.208350   62137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:17:32.210604   62137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.210888   62137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:17:32.210967   62137 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:17:32.211052   62137 addons.go:69] Setting storage-provisioner=true in profile "no-preload-233969"
	I0819 18:17:32.211070   62137 addons.go:69] Setting default-storageclass=true in profile "no-preload-233969"
	I0819 18:17:32.211088   62137 addons.go:234] Setting addon storage-provisioner=true in "no-preload-233969"
	I0819 18:17:32.211084   62137 addons.go:69] Setting metrics-server=true in profile "no-preload-233969"
	W0819 18:17:32.211096   62137 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:17:32.211102   62137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-233969"
	I0819 18:17:32.211125   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211126   62137 addons.go:234] Setting addon metrics-server=true in "no-preload-233969"
	W0819 18:17:32.211166   62137 addons.go:243] addon metrics-server should already be in state true
	I0819 18:17:32.211198   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211124   62137 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:17:32.211475   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211505   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211589   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211601   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211619   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211623   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.212714   62137 out.go:177] * Verifying Kubernetes components...
	I0819 18:17:32.214075   62137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:17:32.227207   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0819 18:17:32.227219   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0819 18:17:32.227615   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.227709   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.228122   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228142   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228216   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228236   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228543   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.228610   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.229074   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229112   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.229120   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229147   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.230316   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0819 18:17:32.230746   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.231408   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.231437   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.231812   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.232018   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.235965   62137 addons.go:234] Setting addon default-storageclass=true in "no-preload-233969"
	W0819 18:17:32.235986   62137 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:17:32.236013   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.236365   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.236392   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.244668   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0819 18:17:32.245056   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.245506   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.245534   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.245816   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0819 18:17:32.245848   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.245989   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.246239   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.246795   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.246811   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.247182   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.247380   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.248517   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.249498   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.250817   62137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:17:32.251649   62137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:17:30.348988   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:32.252466   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:17:32.252483   62137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:17:32.252501   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253309   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0819 18:17:32.253687   62137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.253701   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:17:32.253717   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253828   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.254340   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.254352   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.254706   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.255288   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.255324   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.256274   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256776   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.256796   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256970   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.257109   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.257229   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.257348   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.257756   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258132   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.258144   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258384   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.258531   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.258663   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.258788   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.271706   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0819 18:17:32.272115   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.272558   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.272575   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.272875   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.273041   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.274711   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.274914   62137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.274924   62137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:17:32.274936   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.277689   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278191   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.278246   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278358   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.278533   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.278701   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.278847   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.423546   62137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:17:32.445680   62137 node_ready.go:35] waiting up to 6m0s for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.471999   62137 node_ready.go:49] node "no-preload-233969" has status "Ready":"True"
	I0819 18:17:32.472028   62137 node_ready.go:38] duration metric: took 26.307315ms for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.472041   62137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:32.478401   62137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:32.518483   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.568928   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:17:32.568953   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:17:32.592301   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.645484   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:17:32.645513   62137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:17:32.715522   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:32.715552   62137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:17:32.781693   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:33.756997   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.238477445s)
	I0819 18:17:33.757035   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757044   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757051   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.164710772s)
	I0819 18:17:33.757088   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757101   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757454   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757450   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757466   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757475   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757483   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757490   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757538   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757564   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757616   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757640   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757712   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757729   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757733   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757852   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757915   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757937   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.831562   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.831588   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.831891   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.831907   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928005   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146269845s)
	I0819 18:17:33.928064   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928082   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928391   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928438   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928452   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928465   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928477   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928809   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928820   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928835   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928851   62137 addons.go:475] Verifying addon metrics-server=true in "no-preload-233969"
	I0819 18:17:33.930974   62137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 18:17:33.932101   62137 addons.go:510] duration metric: took 1.72114773s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
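The three enabled addons can be spot-checked from the host once this profile's kubeconfig is active; the metrics-server deployment name below is the one the addon normally creates and is an assumption, while the storage-provisioner pod name matches the listings further down in this log:

# Profile-level view of what minikube thinks is enabled (assumes the minikube binary is on PATH).
minikube -p no-preload-233969 addons list
# Cluster-level view of the three addons.
kubectl -n kube-system get deploy metrics-server
kubectl -n kube-system get pod storage-provisioner
kubectl get storageclass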
	I0819 18:17:34.486566   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:33.421045   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:36.984891   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.484617   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.500962   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:42.572983   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:41.990189   62137 pod_ready.go:93] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.990210   62137 pod_ready.go:82] duration metric: took 9.511780534s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.990221   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997282   62137 pod_ready.go:93] pod "kube-apiserver-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.997301   62137 pod_ready.go:82] duration metric: took 7.074393ms for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997310   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008757   62137 pod_ready.go:93] pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.008775   62137 pod_ready.go:82] duration metric: took 11.458424ms for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008785   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017802   62137 pod_ready.go:93] pod "kube-proxy-pt5nj" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.017820   62137 pod_ready.go:82] duration metric: took 9.029628ms for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017828   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025402   62137 pod_ready.go:93] pod "kube-scheduler-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.025424   62137 pod_ready.go:82] duration metric: took 7.589229ms for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025433   62137 pod_ready.go:39] duration metric: took 9.553379252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:42.025451   62137 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:17:42.025508   62137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:17:42.043190   62137 api_server.go:72] duration metric: took 9.832267712s to wait for apiserver process to appear ...
	I0819 18:17:42.043214   62137 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:17:42.043231   62137 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I0819 18:17:42.051124   62137 api_server.go:279] https://192.168.50.8:8443/healthz returned 200:
	ok
	I0819 18:17:42.052367   62137 api_server.go:141] control plane version: v1.31.0
	I0819 18:17:42.052392   62137 api_server.go:131] duration metric: took 9.170652ms to wait for apiserver health ...
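The same health probe can be repeated by hand; the kubectl form issues it through the configured kubeconfig, while the raw curl form relies on the apiserver's default anonymous access to /healthz (endpoint and port copied from the log):

# Authenticated check through the configured kubeconfig.
kubectl get --raw /healthz
# Unauthenticated check straight against the endpoint the log probes.
curl -k https://192.168.50.8:8443/healthz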
	I0819 18:17:42.052404   62137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:17:42.187227   62137 system_pods.go:59] 9 kube-system pods found
	I0819 18:17:42.187254   62137 system_pods.go:61] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.187259   62137 system_pods.go:61] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.187263   62137 system_pods.go:61] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.187267   62137 system_pods.go:61] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.187270   62137 system_pods.go:61] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.187273   62137 system_pods.go:61] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.187277   62137 system_pods.go:61] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.187282   62137 system_pods.go:61] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.187285   62137 system_pods.go:61] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.187292   62137 system_pods.go:74] duration metric: took 134.882111ms to wait for pod list to return data ...
	I0819 18:17:42.187299   62137 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:17:42.382612   62137 default_sa.go:45] found service account: "default"
	I0819 18:17:42.382643   62137 default_sa.go:55] duration metric: took 195.337173ms for default service account to be created ...
	I0819 18:17:42.382652   62137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:17:42.585988   62137 system_pods.go:86] 9 kube-system pods found
	I0819 18:17:42.586024   62137 system_pods.go:89] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.586032   62137 system_pods.go:89] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.586038   62137 system_pods.go:89] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.586044   62137 system_pods.go:89] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.586049   62137 system_pods.go:89] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.586056   62137 system_pods.go:89] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.586062   62137 system_pods.go:89] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.586072   62137 system_pods.go:89] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.586078   62137 system_pods.go:89] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.586089   62137 system_pods.go:126] duration metric: took 203.431371ms to wait for k8s-apps to be running ...
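metrics-server-6867b74b74-bfkkf is the one pod still not Ready in both listings above; these are the usual next steps, with the pod name copied from the listings:

# Scheduling and image-pull events for the unready pod.
kubectl -n kube-system describe pod metrics-server-6867b74b74-bfkkf
# Container output, if the container has started at all.
kubectl -n kube-system logs metrics-server-6867b74b74-bfkkf --all-containers=true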
	I0819 18:17:42.586101   62137 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:17:42.586154   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:42.601268   62137 system_svc.go:56] duration metric: took 15.156104ms WaitForService to wait for kubelet
	I0819 18:17:42.601305   62137 kubeadm.go:582] duration metric: took 10.39038433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:17:42.601330   62137 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:17:42.783030   62137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:17:42.783058   62137 node_conditions.go:123] node cpu capacity is 2
	I0819 18:17:42.783069   62137 node_conditions.go:105] duration metric: took 181.734608ms to run NodePressure ...
	I0819 18:17:42.783080   62137 start.go:241] waiting for startup goroutines ...
	I0819 18:17:42.783087   62137 start.go:246] waiting for cluster config update ...
	I0819 18:17:42.783097   62137 start.go:255] writing updated cluster config ...
	I0819 18:17:42.783349   62137 ssh_runner.go:195] Run: rm -f paused
	I0819 18:17:42.831445   62137 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:17:42.833881   62137 out.go:177] * Done! kubectl is now configured to use "no-preload-233969" cluster and "default" namespace by default
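At this point the host kubeconfig has been switched to the new profile, so a plain kubectl call is enough for a sanity check (the context name should match the profile name):

kubectl config current-context   # expected: no-preload-233969
kubectl get nodes -o wide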
	I0819 18:17:48.653035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:51.725070   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:57.805043   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:00.881114   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:06.956979   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.974002   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:18:09.974108   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:18:09.975602   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:18:09.975650   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:18:09.975736   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:18:09.975861   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:18:09.975993   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:18:09.976086   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:18:09.978023   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:18:09.978100   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:18:09.978157   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:18:09.978230   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:18:09.978281   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:18:09.978358   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:18:09.978408   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:18:09.978466   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:18:09.978529   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:18:09.978645   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:18:09.978758   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:18:09.978816   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:18:09.978890   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:18:09.978973   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:18:09.979046   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:18:09.979138   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:18:09.979191   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:18:09.979339   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:18:09.979438   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:18:09.979503   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:18:09.979595   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:18:10.028995   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.981931   63216 out.go:235]   - Booting up control plane ...
	I0819 18:18:09.982014   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:18:09.982087   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:18:09.982142   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:18:09.982213   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:18:09.982378   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:18:09.982432   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:18:09.982491   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982715   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982914   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982996   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983204   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983268   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983424   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983485   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983656   63216 kubeadm.go:310] 
	I0819 18:18:09.983705   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:18:09.983747   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:18:09.983754   63216 kubeadm.go:310] 
	I0819 18:18:09.983788   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:18:09.983818   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:18:09.983957   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:18:09.983982   63216 kubeadm.go:310] 
	I0819 18:18:09.984089   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:18:09.984119   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:18:09.984175   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:18:09.984186   63216 kubeadm.go:310] 
	I0819 18:18:09.984277   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:18:09.984372   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:18:09.984378   63216 kubeadm.go:310] 
	I0819 18:18:09.984474   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:18:09.984552   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:18:09.984621   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:18:09.984699   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:18:09.984762   63216 kubeadm.go:310] 
	W0819 18:18:09.984832   63216 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
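The failure text above already names the useful probes; collected here as one sequence to run on the node (for example over an SSH session into the VM), with the crictl endpoint copied from the log:

# Is kubelet running at all, and what did it log while kubeadm was waiting?
sudo systemctl status kubelet
sudo journalctl -xeu kubelet | tail -n 100
# Did any control-plane container start and then crash under cri-o?
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause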
	
	I0819 18:18:09.984873   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:18:10.439037   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:10.453739   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:18:10.463241   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:18:10.463262   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:18:10.463313   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:18:10.472407   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:18:10.472467   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:18:10.481297   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:18:10.489478   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:18:10.489542   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:18:10.498042   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.506373   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:18:10.506433   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.515158   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:18:10.523412   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:18:10.523483   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:18:10.532060   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:18:10.746836   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
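That [WARNING Service-Kubelet] line shows up on every init attempt in this log; the fix it asks for is a one-liner on the node (minikube usually starts kubelet itself, so this mainly silences the preflight warning):

sudo systemctl enable kubelet.service
systemctl is-enabled kubelet   # expected: enabled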
	I0819 18:18:16.109014   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:19.180970   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:25.261041   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:28.333057   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:34.412966   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:37.485036   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:43.565013   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:46.637059   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:52.716967   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:55.789060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:01.869005   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:04.941027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:11.020989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:14.093067   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:20.173021   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:23.248974   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:29.324961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:32.397037   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:38.477031   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:41.549001   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:47.629019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:50.700996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:56.781035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:59.853000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
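Process 66229 has been getting "no route to host" for the same 192.168.72.181:22 endpoint for several minutes. With the kvm2 driver, these host-side checks usually narrow it down to either a stopped VM or a broken libvirt network; the virsh network name is a placeholder to fill in for the affected profile:

# Basic reachability and routing from the host to the guest IP.
ping -c 3 192.168.72.181
ip route get 192.168.72.181
# State of the libvirt domains and networks backing the minikube VMs.
virsh --connect qemu:///system list --all
virsh --connect qemu:///system net-list --all
virsh --connect qemu:///system net-dhcp-leases <network>   # e.g. the mk-<profile> network for this VM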
	I0819 18:20:06.430174   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:20:06.430256   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:20:06.431894   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:20:06.431968   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:20:06.432060   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:20:06.432203   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:20:06.432334   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:20:06.432440   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:20:06.434250   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:20:06.434349   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:20:06.434444   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:20:06.434563   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:20:06.434623   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:20:06.434717   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:20:06.434805   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:20:06.434894   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:20:06.434974   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:20:06.435052   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:20:06.435135   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:20:06.435204   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:20:06.435288   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:20:06.435365   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:20:06.435421   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:20:06.435474   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:20:06.435531   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:20:06.435689   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:20:06.435781   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:20:06.435827   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:20:06.435886   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:20:06.437538   63216 out.go:235]   - Booting up control plane ...
	I0819 18:20:06.437678   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:20:06.437771   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:20:06.437852   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:20:06.437928   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:20:06.438063   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:20:06.438105   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:20:06.438164   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438342   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438416   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438568   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438637   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438821   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438902   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439167   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439264   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439458   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439472   63216 kubeadm.go:310] 
	I0819 18:20:06.439514   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:20:06.439547   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:20:06.439553   63216 kubeadm.go:310] 
	I0819 18:20:06.439583   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:20:06.439626   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:20:06.439732   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:20:06.439749   63216 kubeadm.go:310] 
	I0819 18:20:06.439873   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:20:06.439915   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:20:06.439944   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:20:06.439952   63216 kubeadm.go:310] 
	I0819 18:20:06.440039   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:20:06.440106   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:20:06.440113   63216 kubeadm.go:310] 
	I0819 18:20:06.440252   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:20:06.440329   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:20:06.440392   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:20:06.440458   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:20:06.440521   63216 kubeadm.go:394] duration metric: took 8m2.012853316s to StartCluster
	I0819 18:20:06.440524   63216 kubeadm.go:310] 
	I0819 18:20:06.440559   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:20:06.440610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:20:06.481255   63216 cri.go:89] found id: ""
	I0819 18:20:06.481285   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.481297   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:20:06.481305   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:20:06.481364   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:20:06.516769   63216 cri.go:89] found id: ""
	I0819 18:20:06.516801   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.516811   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:20:06.516818   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:20:06.516933   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:20:06.551964   63216 cri.go:89] found id: ""
	I0819 18:20:06.551998   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.552006   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:20:06.552014   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:20:06.552108   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:20:06.586084   63216 cri.go:89] found id: ""
	I0819 18:20:06.586115   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.586124   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:20:06.586131   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:20:06.586189   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:20:06.620732   63216 cri.go:89] found id: ""
	I0819 18:20:06.620773   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.620785   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:20:06.620792   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:20:06.620843   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:20:06.659731   63216 cri.go:89] found id: ""
	I0819 18:20:06.659762   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.659772   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:20:06.659780   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:20:06.659846   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:20:06.694223   63216 cri.go:89] found id: ""
	I0819 18:20:06.694257   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.694267   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:20:06.694275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:20:06.694337   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:20:06.727474   63216 cri.go:89] found id: ""
	I0819 18:20:06.727508   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.727518   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:20:06.727528   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:20:06.727538   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:20:06.778006   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:20:06.778041   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:20:06.792059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:20:06.792089   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:20:06.863596   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:20:06.863625   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:20:06.863637   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:20:06.979710   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:20:06.979752   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 18:20:07.030879   63216 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:20:07.030930   63216 out.go:270] * 
	W0819 18:20:07.031004   63216 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.031025   63216 out.go:270] * 
	W0819 18:20:07.031896   63216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:20:07.035220   63216 out.go:201] 
	W0819 18:20:07.036384   63216 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.036435   63216 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:20:07.036466   63216 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:20:07.037783   63216 out.go:201] 
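	The kubeadm output above already names its own follow-up diagnostics for this K8S_KUBELET_NOT_RUNNING failure; they are collected here as a minimal troubleshooting sketch, using only commands quoted in that output. The -p <profile> placeholder is illustrative and should be replaced with the affected minikube profile.

		# inside the affected node (e.g. via: minikube -p <profile> ssh)
		systemctl status kubelet
		journalctl -xeu kubelet
		sudo systemctl enable kubelet.service        # addresses the [WARNING Service-Kubelet] noted in stderr above
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

		# remediation suggested by minikube above (run from the host)
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd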
	I0819 18:20:05.933003   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:09.009065   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:15.085040   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:18.160990   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:24.236968   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:27.308959   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:30.310609   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:20:30.310648   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.310938   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:30.310975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.311173   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:30.312703   66229 machine.go:96] duration metric: took 4m37.4225796s to provisionDockerMachine
	I0819 18:20:30.312767   66229 fix.go:56] duration metric: took 4m37.446430724s for fixHost
	I0819 18:20:30.312775   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 4m37.446469265s
	W0819 18:20:30.312789   66229 start.go:714] error starting host: provision: host is not running
	W0819 18:20:30.312878   66229 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 18:20:30.312887   66229 start.go:729] Will try again in 5 seconds ...
	I0819 18:20:35.313124   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:20:35.313223   66229 start.go:364] duration metric: took 60.186µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:20:35.313247   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:20:35.313256   66229 fix.go:54] fixHost starting: 
	I0819 18:20:35.313555   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:20:35.313581   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:20:35.330972   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0819 18:20:35.331433   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:20:35.331878   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:20:35.331897   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:20:35.332189   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:20:35.332376   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:35.332546   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:20:35.334335   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Stopped err=<nil>
	I0819 18:20:35.334360   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	W0819 18:20:35.334529   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:20:35.336031   66229 out.go:177] * Restarting existing kvm2 VM for "embed-certs-306581" ...
	I0819 18:20:35.337027   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Start
	I0819 18:20:35.337166   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring networks are active...
	I0819 18:20:35.337905   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network default is active
	I0819 18:20:35.338212   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network mk-embed-certs-306581 is active
	I0819 18:20:35.338534   66229 main.go:141] libmachine: (embed-certs-306581) Getting domain xml...
	I0819 18:20:35.339265   66229 main.go:141] libmachine: (embed-certs-306581) Creating domain...
	I0819 18:20:36.576142   66229 main.go:141] libmachine: (embed-certs-306581) Waiting to get IP...
	I0819 18:20:36.577067   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.577471   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.577553   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.577459   67882 retry.go:31] will retry after 288.282156ms: waiting for machine to come up
	I0819 18:20:36.866897   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.867437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.867507   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.867415   67882 retry.go:31] will retry after 357.773556ms: waiting for machine to come up
	I0819 18:20:37.227139   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.227672   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.227697   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.227620   67882 retry.go:31] will retry after 360.777442ms: waiting for machine to come up
	I0819 18:20:37.590245   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.590696   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.590725   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.590672   67882 retry.go:31] will retry after 502.380794ms: waiting for machine to come up
	I0819 18:20:38.094422   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.094938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.094963   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.094893   67882 retry.go:31] will retry after 716.370935ms: waiting for machine to come up
	I0819 18:20:38.812922   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.813416   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.813437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.813381   67882 retry.go:31] will retry after 728.320282ms: waiting for machine to come up
	I0819 18:20:39.543316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:39.543705   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:39.543731   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:39.543668   67882 retry.go:31] will retry after 725.532345ms: waiting for machine to come up
	I0819 18:20:40.270826   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:40.271325   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:40.271347   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:40.271280   67882 retry.go:31] will retry after 1.054064107s: waiting for machine to come up
	I0819 18:20:41.326463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:41.326952   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:41.326983   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:41.326896   67882 retry.go:31] will retry after 1.258426337s: waiting for machine to come up
	I0819 18:20:42.587252   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:42.587685   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:42.587715   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:42.587645   67882 retry.go:31] will retry after 1.884128664s: waiting for machine to come up
	I0819 18:20:44.474042   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:44.474569   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:44.474592   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:44.474528   67882 retry.go:31] will retry after 2.484981299s: waiting for machine to come up
	I0819 18:20:46.961480   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:46.961991   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:46.962010   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:46.961956   67882 retry.go:31] will retry after 2.912321409s: waiting for machine to come up
	I0819 18:20:49.877938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:49.878388   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:49.878414   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:49.878347   67882 retry.go:31] will retry after 4.020459132s: waiting for machine to come up
	I0819 18:20:53.901782   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902239   66229 main.go:141] libmachine: (embed-certs-306581) Found IP for machine: 192.168.72.181
	I0819 18:20:53.902260   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has current primary IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902266   66229 main.go:141] libmachine: (embed-certs-306581) Reserving static IP address...
	I0819 18:20:53.902757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.902779   66229 main.go:141] libmachine: (embed-certs-306581) DBG | skip adding static IP to network mk-embed-certs-306581 - found existing host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"}
	I0819 18:20:53.902789   66229 main.go:141] libmachine: (embed-certs-306581) Reserved static IP address: 192.168.72.181
	I0819 18:20:53.902800   66229 main.go:141] libmachine: (embed-certs-306581) Waiting for SSH to be available...
	I0819 18:20:53.902808   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Getting to WaitForSSH function...
	I0819 18:20:53.904907   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905284   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.905316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH client type: external
	I0819 18:20:53.905434   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa (-rw-------)
	I0819 18:20:53.905466   66229 main.go:141] libmachine: (embed-certs-306581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:20:53.905481   66229 main.go:141] libmachine: (embed-certs-306581) DBG | About to run SSH command:
	I0819 18:20:53.905493   66229 main.go:141] libmachine: (embed-certs-306581) DBG | exit 0
	I0819 18:20:54.024614   66229 main.go:141] libmachine: (embed-certs-306581) DBG | SSH cmd err, output: <nil>: 
	I0819 18:20:54.024991   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetConfigRaw
	I0819 18:20:54.025614   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.028496   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.028901   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.028935   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.029207   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:20:54.029412   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:20:54.029430   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.029630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.032073   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032436   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.032463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032647   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.032822   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033002   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033136   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.033284   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.033483   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.033498   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:20:54.132908   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 18:20:54.132938   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133214   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:54.133238   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133426   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.135967   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136324   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.136356   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136507   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.136713   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.136873   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.137028   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.137215   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.137423   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.137437   66229 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-306581 && echo "embed-certs-306581" | sudo tee /etc/hostname
	I0819 18:20:54.250819   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-306581
	
	I0819 18:20:54.250849   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.253776   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254119   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.254150   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254351   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.254574   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254718   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254872   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.255090   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.255269   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.255286   66229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-306581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-306581/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-306581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:20:54.361268   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:20:54.361300   66229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:20:54.361328   66229 buildroot.go:174] setting up certificates
	I0819 18:20:54.361342   66229 provision.go:84] configureAuth start
	I0819 18:20:54.361359   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.361630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.364099   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364511   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.364541   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364666   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.366912   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367301   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.367329   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367447   66229 provision.go:143] copyHostCerts
	I0819 18:20:54.367496   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:20:54.367515   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:20:54.367586   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:20:54.367687   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:20:54.367699   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:20:54.367737   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:20:54.367824   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:20:54.367834   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:20:54.367860   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:20:54.367919   66229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.embed-certs-306581 san=[127.0.0.1 192.168.72.181 embed-certs-306581 localhost minikube]
	I0819 18:20:54.424019   66229 provision.go:177] copyRemoteCerts
	I0819 18:20:54.424075   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:20:54.424096   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.426737   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.426994   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.427016   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.427171   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.427380   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.427523   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.427645   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.506517   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:20:54.530454   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 18:20:54.552740   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:20:54.574870   66229 provision.go:87] duration metric: took 213.51055ms to configureAuth
	I0819 18:20:54.574904   66229 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:20:54.575077   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:20:54.575213   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.577946   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578283   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.578312   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578484   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.578683   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578878   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578993   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.579122   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.579267   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.579281   66229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:20:54.825788   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:20:54.825815   66229 machine.go:96] duration metric: took 796.390996ms to provisionDockerMachine
	I0819 18:20:54.825826   66229 start.go:293] postStartSetup for "embed-certs-306581" (driver="kvm2")
	I0819 18:20:54.825836   66229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:20:54.825850   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.826187   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:20:54.826214   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.829048   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829433   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.829462   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829582   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.829819   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.829963   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.830093   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.911609   66229 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:20:54.915894   66229 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:20:54.915916   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:20:54.915979   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:20:54.916049   66229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:20:54.916134   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:20:54.926185   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:20:54.952362   66229 start.go:296] duration metric: took 126.500839ms for postStartSetup
	I0819 18:20:54.952401   66229 fix.go:56] duration metric: took 19.639145598s for fixHost
	I0819 18:20:54.952420   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.955522   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.955881   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.955909   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.956078   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.956270   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956450   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.956785   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.956940   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.956950   66229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:20:55.053204   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091655.030704823
	
	I0819 18:20:55.053229   66229 fix.go:216] guest clock: 1724091655.030704823
	I0819 18:20:55.053237   66229 fix.go:229] Guest: 2024-08-19 18:20:55.030704823 +0000 UTC Remote: 2024-08-19 18:20:54.952405352 +0000 UTC m=+302.228892640 (delta=78.299471ms)
	I0819 18:20:55.053254   66229 fix.go:200] guest clock delta is within tolerance: 78.299471ms
	I0819 18:20:55.053261   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 19.740028573s
	I0819 18:20:55.053277   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.053530   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:55.056146   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056523   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.056546   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056677   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057135   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057320   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057404   66229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:20:55.057445   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.057497   66229 ssh_runner.go:195] Run: cat /version.json
	I0819 18:20:55.057518   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.059944   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.059969   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060265   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060296   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060359   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060416   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060528   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060672   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060778   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060838   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060899   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.060941   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.183438   66229 ssh_runner.go:195] Run: systemctl --version
	I0819 18:20:55.189341   66229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:20:55.330628   66229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:20:55.336807   66229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:20:55.336877   66229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:20:55.351865   66229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:20:55.351893   66229 start.go:495] detecting cgroup driver to use...
	I0819 18:20:55.351988   66229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:20:55.368983   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:20:55.382795   66229 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:20:55.382848   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:20:55.396175   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:20:55.409333   66229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:20:55.534054   66229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:20:55.685410   66229 docker.go:233] disabling docker service ...
	I0819 18:20:55.685483   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:20:55.699743   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:20:55.712425   66229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:20:55.842249   66229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:20:55.964126   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:20:55.978354   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:20:55.995963   66229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:20:55.996032   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.006717   66229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:20:56.006810   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.017350   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.027098   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.037336   66229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:20:56.047188   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.059128   66229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.076950   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.087819   66229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:20:56.097922   66229 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:20:56.097980   66229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:20:56.114569   66229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:20:56.130215   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:20:56.243812   66229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:20:56.376166   66229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:20:56.376294   66229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:20:56.380916   66229 start.go:563] Will wait 60s for crictl version
	I0819 18:20:56.380973   66229 ssh_runner.go:195] Run: which crictl
	I0819 18:20:56.384492   66229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:20:56.421992   66229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:20:56.422058   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.448657   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.477627   66229 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:20:56.479098   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:56.482364   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:56.482800   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482997   66229 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 18:20:56.486798   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:20:56.498662   66229 kubeadm.go:883] updating cluster {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:20:56.498820   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:20:56.498890   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:56.534076   66229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:20:56.534137   66229 ssh_runner.go:195] Run: which lz4
	I0819 18:20:56.537906   66229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:20:56.541691   66229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:20:56.541726   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:20:57.728202   66229 crio.go:462] duration metric: took 1.190335452s to copy over tarball
	I0819 18:20:57.728263   66229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:20:59.870389   66229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.142096936s)
	I0819 18:20:59.870434   66229 crio.go:469] duration metric: took 2.142210052s to extract the tarball
	I0819 18:20:59.870443   66229 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:20:59.907013   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:59.949224   66229 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:20:59.949244   66229 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:20:59.949257   66229 kubeadm.go:934] updating node { 192.168.72.181 8443 v1.31.0 crio true true} ...
	I0819 18:20:59.949790   66229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-306581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:20:59.949851   66229 ssh_runner.go:195] Run: crio config
	I0819 18:20:59.993491   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:20:59.993521   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:20:59.993535   66229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:20:59.993561   66229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.181 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-306581 NodeName:embed-certs-306581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:20:59.993735   66229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-306581"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:20:59.993814   66229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:21:00.003488   66229 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:21:00.003563   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:21:00.012546   66229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0819 18:21:00.028546   66229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:21:00.044037   66229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
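
The multi-document kubeadm configuration logged above is what gets written to /var/tmp/minikube/kubeadm.yaml.new in this step. As an illustrative aside only (not part of minikube's code), a minimal Go sketch that decodes such a multi-document YAML stream and lists each document's apiVersion and kind, assuming the gopkg.in/yaml.v3 module and a hypothetical local copy named kubeadm.yaml:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// printKinds decodes a multi-document YAML stream (InitConfiguration,
// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration in the
// config above) and prints the apiVersion and kind of each document.
func printKinds(r io.Reader) error {
	dec := yaml.NewDecoder(r)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := printKinds(f); err != nil {
		panic(err)
	}
}

Running it against the config shown above would print four documents, two from kubeadm.k8s.io/v1beta3 plus the kubelet and kube-proxy component configs.
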
	I0819 18:21:00.059422   66229 ssh_runner.go:195] Run: grep 192.168.72.181	control-plane.minikube.internal$ /etc/hosts
	I0819 18:21:00.062992   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:21:00.075172   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:21:00.213050   66229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:21:00.230086   66229 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581 for IP: 192.168.72.181
	I0819 18:21:00.230114   66229 certs.go:194] generating shared ca certs ...
	I0819 18:21:00.230135   66229 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:21:00.230303   66229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:21:00.230371   66229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:21:00.230386   66229 certs.go:256] generating profile certs ...
	I0819 18:21:00.230506   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/client.key
	I0819 18:21:00.230593   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key.cf6a9e5e
	I0819 18:21:00.230652   66229 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key
	I0819 18:21:00.230819   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:21:00.230863   66229 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:21:00.230877   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:21:00.230912   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:21:00.230951   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:21:00.230985   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:21:00.231053   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:21:00.231968   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:21:00.265793   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:21:00.292911   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:21:00.333617   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:21:00.361258   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 18:21:00.394711   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:21:00.417880   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:21:00.440771   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:21:00.464416   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:21:00.489641   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:21:00.512135   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:21:00.535608   66229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:21:00.552131   66229 ssh_runner.go:195] Run: openssl version
	I0819 18:21:00.557821   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:21:00.568710   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573178   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573239   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.578820   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:21:00.589649   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:21:00.600652   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.604986   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.605049   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.610552   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:21:00.620514   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:21:00.630217   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634541   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634599   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.639839   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 18:21:00.649821   66229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:21:00.654288   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:21:00.660071   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:21:00.665354   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:21:00.670791   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:21:00.676451   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:21:00.682099   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
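
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours). As a hedged illustration of the same check (not how minikube itself implements it), a short Go sketch using crypto/x509; the path in main is hypothetical, since in the log the checks run against /var/lib/minikube/certs on the guest:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within the given duration, the equivalent of
// `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour) // hypothetical path
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
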
	I0819 18:21:00.687792   66229 kubeadm.go:392] StartCluster: {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:21:00.687869   66229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:21:00.687914   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.730692   66229 cri.go:89] found id: ""
	I0819 18:21:00.730762   66229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:21:00.740607   66229 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 18:21:00.740627   66229 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 18:21:00.740687   66229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 18:21:00.750127   66229 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:21:00.751927   66229 kubeconfig.go:125] found "embed-certs-306581" server: "https://192.168.72.181:8443"
	I0819 18:21:00.754865   66229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 18:21:00.764102   66229 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.181
	I0819 18:21:00.764130   66229 kubeadm.go:1160] stopping kube-system containers ...
	I0819 18:21:00.764142   66229 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 18:21:00.764210   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.797866   66229 cri.go:89] found id: ""
	I0819 18:21:00.797939   66229 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 18:21:00.815065   66229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:21:00.824279   66229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:21:00.824297   66229 kubeadm.go:157] found existing configuration files:
	
	I0819 18:21:00.824336   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:21:00.832688   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:21:00.832766   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:21:00.841795   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:21:00.852300   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:21:00.852358   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:21:00.862973   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.873195   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:21:00.873243   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.882559   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:21:00.892687   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:21:00.892774   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:21:00.903746   66229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:21:00.913161   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.017511   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.829503   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.047620   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.105126   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.157817   66229 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:21:02.157927   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:02.658716   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.158468   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.658865   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.157979   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.175682   66229 api_server.go:72] duration metric: took 2.017872037s to wait for apiserver process to appear ...
	I0819 18:21:04.175711   66229 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:21:04.175731   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.251226   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.251253   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.251265   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.290762   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.290788   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.676347   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.695167   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:07.695220   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.176382   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.183772   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:08.183816   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.676435   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.680898   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0819 18:21:08.686996   66229 api_server.go:141] control plane version: v1.31.0
	I0819 18:21:08.687023   66229 api_server.go:131] duration metric: took 4.511304673s to wait for apiserver health ...
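
The healthz block above polls https://192.168.72.181:8443/healthz roughly every 500ms, tolerating the initial 403 (anonymous user) and 500 (rbac/bootstrap-roles and system-priority-classes post-start hooks still running) responses until the endpoint returns 200 "ok". A minimal sketch of that kind of polling loop in Go, under the assumption of skipping TLS verification for brevity (minikube's real api_server.go check is more involved):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 with body "ok" or the timeout
// elapses; 403/500 responses are treated as "not ready yet", as in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only: do not verify the apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.181:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
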
	I0819 18:21:08.687031   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:21:08.687037   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:21:08.688988   66229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:21:08.690213   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:21:08.701051   66229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 18:21:08.719754   66229 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:21:08.732139   66229 system_pods.go:59] 8 kube-system pods found
	I0819 18:21:08.732172   66229 system_pods.go:61] "coredns-6f6b679f8f-222n6" [1d55fb75-011d-4517-8601-b55ff22d0fe1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:21:08.732179   66229 system_pods.go:61] "etcd-embed-certs-306581" [0b299b0b-00ec-45d6-9e5f-6f8677734138] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 18:21:08.732187   66229 system_pods.go:61] "kube-apiserver-embed-certs-306581" [c0342f0d-3e9b-4118-abcb-e6585ec8205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 18:21:08.732192   66229 system_pods.go:61] "kube-controller-manager-embed-certs-306581" [3e8441b3-f3cc-4e0b-9e9b-2dc1fd41ca1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 18:21:08.732196   66229 system_pods.go:61] "kube-proxy-4vt6x" [559e4638-9505-4d7f-b84e-77b813c84ab4] Running
	I0819 18:21:08.732204   66229 system_pods.go:61] "kube-scheduler-embed-certs-306581" [39ec99a8-3e38-40f6-af5e-02a437573bd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 18:21:08.732210   66229 system_pods.go:61] "metrics-server-6867b74b74-dmpfh" [0edd2d8d-aa29-4817-babb-09e185fc0578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:21:08.732213   66229 system_pods.go:61] "storage-provisioner" [f267a05a-418f-49a9-b09d-a6330ffa4abf] Running
	I0819 18:21:08.732219   66229 system_pods.go:74] duration metric: took 12.445292ms to wait for pod list to return data ...
	I0819 18:21:08.732226   66229 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:21:08.735979   66229 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:21:08.736004   66229 node_conditions.go:123] node cpu capacity is 2
	I0819 18:21:08.736015   66229 node_conditions.go:105] duration metric: took 3.784963ms to run NodePressure ...
	I0819 18:21:08.736029   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:08.995746   66229 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001567   66229 kubeadm.go:739] kubelet initialised
	I0819 18:21:09.001592   66229 kubeadm.go:740] duration metric: took 5.816928ms waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001603   66229 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:21:09.006253   66229 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:11.015091   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:13.512551   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:15.512696   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:16.513342   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:16.513387   66229 pod_ready.go:82] duration metric: took 7.507092015s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:16.513404   66229 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519842   66229 pod_ready.go:93] pod "etcd-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.519864   66229 pod_ready.go:82] duration metric: took 1.006452738s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519873   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524383   66229 pod_ready.go:93] pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.524401   66229 pod_ready.go:82] duration metric: took 4.522465ms for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524411   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:19.536012   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:22.030530   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:23.530792   66229 pod_ready.go:93] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.530818   66229 pod_ready.go:82] duration metric: took 6.006401322s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.530828   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535011   66229 pod_ready.go:93] pod "kube-proxy-4vt6x" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.535030   66229 pod_ready.go:82] duration metric: took 4.196825ms for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535038   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538712   66229 pod_ready.go:93] pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.538731   66229 pod_ready.go:82] duration metric: took 3.686091ms for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538743   66229 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:25.545068   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:28.044531   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:30.044724   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:32.545647   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:35.044620   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:37.044937   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:39.045319   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:41.545155   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:43.545946   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:46.045829   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:48.544436   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:50.546582   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:53.045122   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:55.544595   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:57.544701   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:00.044887   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:02.044950   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:04.544241   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:06.546130   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:09.044418   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:11.045634   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:13.545020   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:16.045408   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:18.544890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:21.044294   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:23.045251   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:25.545598   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:27.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:30.044377   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:32.045041   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:34.045316   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:36.045466   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:38.543870   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:40.544216   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:42.545271   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:45.044619   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:47.045364   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:49.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:51.045992   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:53.544682   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:56.045091   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:58.045324   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:00.046083   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:02.545541   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:05.045078   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:07.544235   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:09.545586   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:12.045449   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:14.545054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:16.545253   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:19.044239   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:21.045012   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:23.045831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:25.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:28.045069   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:30.045417   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:32.545986   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:35.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:37.545427   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:39.545715   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:42.046173   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:44.545426   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:46.545560   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:48.546489   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:51.044803   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:53.044925   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:55.544871   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:57.545044   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:00.044157   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:02.045599   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:04.546054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:07.044956   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:09.044993   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:11.045233   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:13.046097   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:15.046223   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:17.544258   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:19.545890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:22.044892   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:24.045926   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:26.545100   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:29.044231   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:31.044942   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:33.545660   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:36.045482   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:38.545467   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:40.545731   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:43.045524   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:45.545299   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:48.044040   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:50.044556   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:52.046009   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:54.545370   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:57.044344   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:59.544590   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:02.045528   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:04.546831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:07.045865   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:09.544718   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:12.044142   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:14.045777   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:16.048107   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
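	[editor note] The pod_ready.go loop above polls metrics-server-6867b74b74-dmpfh roughly every 2.5 seconds and never sees its Ready condition turn True within the 4m window. The same condition can be read directly with client-go; the sketch below is illustrative only, assuming a local kubeconfig for this cluster (the kubeconfig path and error handling here are not part of the test harness).

```go
// podready.go: report whether a pod's Ready condition is True, roughly the
// check the pod_ready.go helper above keeps polling. Sketch only.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; the test harness manages its own contexts.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-6867b74b74-dmpfh", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
```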
	
	
	==> CRI-O <==
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.235379994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091918235354385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8156cc4d-9e16-4666-bc7e-5dbbbffcbb2c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.235922020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6068bf4-650e-4cf3-ac36-c997eab9b17b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.236006234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6068bf4-650e-4cf3-ac36-c997eab9b17b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.236207516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6068bf4-650e-4cf3-ac36-c997eab9b17b name=/runtime.v1.RuntimeService/ListContainers
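	[editor note] The ListContainers entries above are ordinary runtime.v1 RPCs served by CRI-O over its local socket. For reference, an equivalent query can be issued with the CRI Go client; a minimal sketch follows, assuming CRI-O's default socket path /var/run/crio/crio.sock (root privileges are typically required to reach it). The requests logged here come from the kubelet and the test tooling, not from this snippet.

```go
// crilist.go: issue a runtime.v1 ListContainers call like the ones logged
// above, against CRI-O's default socket. Illustrative sketch only.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	// Print a short id, name, and state for each container, similar to `crictl ps -a`.
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}
```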
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.269941921Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcdb87fe-fbec-4701-9584-c29334db7a48 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.270029085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcdb87fe-fbec-4701-9584-c29334db7a48 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.270923159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9a8c438-c6d1-4074-85cd-de4dffb0da87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.271344292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091918271320426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9a8c438-c6d1-4074-85cd-de4dffb0da87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.271912609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b30b669-338f-4092-9787-90a006e6150b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.271983524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b30b669-338f-4092-9787-90a006e6150b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.272200152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b30b669-338f-4092-9787-90a006e6150b name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.307055192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b53ccf1f-c128-4b69-a9cc-1c57ca90915a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.307152971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b53ccf1f-c128-4b69-a9cc-1c57ca90915a name=/runtime.v1.RuntimeService/Version
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.308110783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af6e0d37-2a12-4e8b-b1a8-44537b266f15 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.308546818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091918308522730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af6e0d37-2a12-4e8b-b1a8-44537b266f15 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.309123261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b4483ce-a287-45fc-9d7b-809b16f718e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.309206623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b4483ce-a287-45fc-9d7b-809b16f718e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.309450562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b4483ce-a287-45fc-9d7b-809b16f718e4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.341170663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ac64930-646f-40ee-b6dc-ae971ee44c6d name=/runtime.v1.RuntimeService/Version
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.341272478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ac64930-646f-40ee-b6dc-ae971ee44c6d name=/runtime.v1.RuntimeService/Version
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.342433590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e3535b9-a28b-420f-961b-0ff7c8af1c25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.342933003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091918342907764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e3535b9-a28b-420f-961b-0ff7c8af1c25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.343383582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74bf393e-e794-45a7-b979-963509b27f6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.343451072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74bf393e-e794-45a7-b979-963509b27f6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:25:18 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:25:18.344372573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74bf393e-e794-45a7-b979-963509b27f6e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c836b0235de70       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   aac9a42aaca67       storage-provisioner
	b9079ae273223       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   92b342207b58a       busybox
	85dd74b0050d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   256397ebb865f       coredns-6f6b679f8f-4jvnz
	eb30ed4fd51a8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   2c062223259f3       kube-proxy-j4x48
	cef2e9a618dd4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   aac9a42aaca67       storage-provisioner
	d5fff05f93c77       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   068e579a79a56       kube-apiserver-default-k8s-diff-port-813424
	8832533edf13e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   c1dd8bd99022f       etcd-default-k8s-diff-port-813424
	faf8db92753dd       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   210d84764ce9c       kube-controller-manager-default-k8s-diff-port-813424
	93344a9847519       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   45504ed40a59e       kube-scheduler-default-k8s-diff-port-813424
	
	
	==> coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33584 - 26231 "HINFO IN 7158233729066554603.5883956134227833022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012419666s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-813424
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-813424
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=default-k8s-diff-port-813424
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_03_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:03:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-813424
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:22:30 +0000   Mon, 19 Aug 2024 18:03:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:22:30 +0000   Mon, 19 Aug 2024 18:03:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:22:30 +0000   Mon, 19 Aug 2024 18:03:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:22:30 +0000   Mon, 19 Aug 2024 18:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.243
	  Hostname:    default-k8s-diff-port-813424
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08e182672dc747df8c1f0d4f4aaaa876
	  System UUID:                08e18267-2dc7-47df-8c1f-0d4f4aaaa876
	  Boot ID:                    765fbb80-de14-4300-a592-1edf16df4bf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-4jvnz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-813424                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-813424             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-813424    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-j4x48                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-813424             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-tp742                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-813424 event: Registered Node default-k8s-diff-port-813424 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-813424 event: Registered Node default-k8s-diff-port-813424 in Controller
	
	
	==> dmesg <==
	[Aug19 18:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051247] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037844] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.853515] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.893637] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.531463] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.425621] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.058498] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057092] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.194710] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.149379] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.301090] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +4.015662] systemd-fstab-generator[815]: Ignoring "noauto" option for root device
	[  +2.027733] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +0.058613] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.531889] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.901349] systemd-fstab-generator[1570]: Ignoring "noauto" option for root device
	[  +3.759782] kauditd_printk_skb: 64 callbacks suppressed
	[Aug19 18:12] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] <==
	{"level":"warn","ts":"2024-08-19T18:12:03.723428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.609829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" ","response":"range_response_count:1 size:7113"}
	{"level":"info","ts":"2024-08-19T18:12:03.723449Z","caller":"traceutil/trace.go:171","msg":"trace[1968580330] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424; range_end:; response_count:1; response_revision:627; }","duration":"177.642919ms","start":"2024-08-19T18:12:03.545798Z","end":"2024-08-19T18:12:03.723441Z","steps":["trace[1968580330] 'agreement among raft nodes before linearized reading'  (duration: 177.523849ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:04.604249Z","caller":"traceutil/trace.go:171","msg":"trace[312940108] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"293.345621ms","start":"2024-08-19T18:12:04.310883Z","end":"2024-08-19T18:12:04.604229Z","steps":["trace[312940108] 'process raft request'  (duration: 293.00309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:12:05.317764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"403.499455ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14852749988437571731 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" mod_revision:628 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" value_size:6828 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-19T18:12:05.318049Z","caller":"traceutil/trace.go:171","msg":"trace[486514939] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"698.691302ms","start":"2024-08-19T18:12:04.619346Z","end":"2024-08-19T18:12:05.318037Z","steps":["trace[486514939] 'process raft request'  (duration: 294.126023ms)","trace[486514939] 'compare'  (duration: 403.362192ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:12:05.318297Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:12:04.619312Z","time spent":"698.934318ms","remote":"127.0.0.1:47700","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6906,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" mod_revision:628 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" value_size:6828 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" > >"}
	{"level":"info","ts":"2024-08-19T18:12:05.319885Z","caller":"traceutil/trace.go:171","msg":"trace[1237329407] linearizableReadLoop","detail":"{readStateIndex:669; appliedIndex:668; }","duration":"440.86588ms","start":"2024-08-19T18:12:04.877095Z","end":"2024-08-19T18:12:05.317961Z","steps":["trace[1237329407] 'read index received'  (duration: 36.279475ms)","trace[1237329407] 'applied index is now lower than readState.Index'  (duration: 404.585073ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:12:05.320334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.598268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" ","response":"range_response_count:1 size:6921"}
	{"level":"info","ts":"2024-08-19T18:12:05.320376Z","caller":"traceutil/trace.go:171","msg":"trace[256885946] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424; range_end:; response_count:1; response_revision:629; }","duration":"273.64385ms","start":"2024-08-19T18:12:05.046723Z","end":"2024-08-19T18:12:05.320367Z","steps":["trace[256885946] 'agreement among raft nodes before linearized reading'  (duration: 273.511679ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:21.453414Z","caller":"traceutil/trace.go:171","msg":"trace[563747473] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"133.902294ms","start":"2024-08-19T18:12:21.319495Z","end":"2024-08-19T18:12:21.453397Z","steps":["trace[563747473] 'process raft request'  (duration: 133.547495ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:29.451845Z","caller":"traceutil/trace.go:171","msg":"trace[1743090959] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"123.374319ms","start":"2024-08-19T18:12:29.328452Z","end":"2024-08-19T18:12:29.451826Z","steps":["trace[1743090959] 'process raft request'  (duration: 123.13376ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:49.836314Z","caller":"traceutil/trace.go:171","msg":"trace[110017646] transaction","detail":"{read_only:false; response_revision:670; number_of_response:1; }","duration":"104.688583ms","start":"2024-08-19T18:12:49.731591Z","end":"2024-08-19T18:12:49.836280Z","steps":["trace[110017646] 'process raft request'  (duration: 104.559864ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:50.039629Z","caller":"traceutil/trace.go:171","msg":"trace[655742769] linearizableReadLoop","detail":"{readStateIndex:721; appliedIndex:719; }","duration":"246.39449ms","start":"2024-08-19T18:12:49.793221Z","end":"2024-08-19T18:12:50.039615Z","steps":["trace[655742769] 'read index received'  (duration: 43.004084ms)","trace[655742769] 'applied index is now lower than readState.Index'  (duration: 203.389707ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:12:50.039927Z","caller":"traceutil/trace.go:171","msg":"trace[25954473] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"282.096082ms","start":"2024-08-19T18:12:49.757819Z","end":"2024-08-19T18:12:50.039915Z","steps":["trace[25954473] 'process raft request'  (duration: 218.528197ms)","trace[25954473] 'compare'  (duration: 63.131367ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:12:50.040160Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.87744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T18:12:50.040210Z","caller":"traceutil/trace.go:171","msg":"trace[1042123150] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:671; }","duration":"246.986079ms","start":"2024-08-19T18:12:49.793217Z","end":"2024-08-19T18:12:50.040203Z","steps":["trace[1042123150] 'agreement among raft nodes before linearized reading'  (duration: 246.851289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:12:50.040409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.914955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-tp742\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-08-19T18:12:50.040645Z","caller":"traceutil/trace.go:171","msg":"trace[248723877] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-tp742; range_end:; response_count:1; response_revision:671; }","duration":"186.151603ms","start":"2024-08-19T18:12:49.854485Z","end":"2024-08-19T18:12:50.040636Z","steps":["trace[248723877] 'agreement among raft nodes before linearized reading'  (duration: 185.833346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.786428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.499442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.786998Z","caller":"traceutil/trace.go:171","msg":"trace[1203999748] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1080; }","duration":"180.128019ms","start":"2024-08-19T18:21:02.606840Z","end":"2024-08-19T18:21:02.786968Z","steps":["trace[1203999748] 'range keys from in-memory index tree'  (duration: 179.37357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.786428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.060274ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.787126Z","caller":"traceutil/trace.go:171","msg":"trace[1957312397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1080; }","duration":"172.788802ms","start":"2024-08-19T18:21:02.614324Z","end":"2024-08-19T18:21:02.787113Z","steps":["trace[1957312397] 'range keys from in-memory index tree'  (duration: 172.049921ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:21:47.186196Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":872}
	{"level":"info","ts":"2024-08-19T18:21:47.195891Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":872,"took":"9.38176ms","hash":1799941142,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2609152,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-08-19T18:21:47.195944Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1799941142,"revision":872,"compact-revision":-1}
	
	
	==> kernel <==
	 18:25:18 up 13 min,  0 users,  load average: 0.00, 0.10, 0.15
	Linux default-k8s-diff-port-813424 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 18:21:49.385635       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:21:49.385924       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 18:21:49.387047       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:21:49.387090       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:22:49.388230       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:22:49.388656       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 18:22:49.388875       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:22:49.388939       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 18:22:49.390034       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:22:49.390053       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:24:49.390789       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:24:49.391158       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 18:24:49.390826       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:24:49.391300       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:24:49.392443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:24:49.392519       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
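
The repeating 503s above mean the aggregated v1beta1.metrics.k8s.io API has no healthy backend while the metrics-server pod is stuck. A quick follow-up check (not part of the harness output) is to read the APIService itself, whose AVAILABLE column would typically show False with a FailedDiscoveryCheck or MissingEndpoints reason until the backing pod becomes Ready:

  kubectl --context default-k8s-diff-port-813424 get apiservice v1beta1.metrics.k8s.io
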
	
	
	==> kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] <==
	E0819 18:19:52.005273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:19:52.467048       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:20:22.011570       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:20:22.475320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:20:52.018469       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:20:52.482863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:21:22.025096       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:21:22.491858       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:21:52.031381       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:21:52.499815       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:22:22.037952       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:22:22.509797       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:22:30.401933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-813424"
	E0819 18:22:52.044754       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:22:52.518573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:22:53.147986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="362.95µs"
	I0819 18:23:08.145980       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="244.637µs"
	E0819 18:23:22.051021       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:23:22.525954       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:23:52.057797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:23:52.533584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:24:22.064428       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:24:22.542269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:24:52.070456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:24:52.550650       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
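
The controller-manager is only complaining about the same stale metrics.k8s.io/v1beta1 discovery data; the resource-quota and garbage-collector controllers retry roughly every 30s and would settle down once the aggregated API is served again. The ReplicaSet they keep re-syncing can be inspected directly, using the name taken from the log above:

  kubectl --context default-k8s-diff-port-813424 -n kube-system get rs metrics-server-6867b74b74 -o wide
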
	
	
	==> kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:11:49.782553       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:11:49.795398       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0819 18:11:49.795470       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:11:49.848847       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:11:49.848887       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:11:49.848915       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:11:49.854360       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:11:49.854812       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:11:49.854839       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:11:49.856704       1 config.go:197] "Starting service config controller"
	I0819 18:11:49.856762       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:11:49.856797       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:11:49.856802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:11:49.857270       1 config.go:326] "Starting node config controller"
	I0819 18:11:49.857295       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:11:49.957205       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:11:49.957269       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:11:49.957513       1 shared_informer.go:320] Caches are synced for node config
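
The nftables errors at the top of this log are only the cleanup pass failing on a kernel without nft support; kube-proxy then falls back to the iptables proxier, as the "Using iptables Proxier" line shows, and the caches sync normally. To double-check that service rules were actually programmed, the standard KUBE-SERVICES chain (name assumed from the usual iptables proxier layout) can be dumped over the same minikube ssh path the tests use:

  out/minikube-linux-amd64 -p default-k8s-diff-port-813424 ssh "sudo iptables -t nat -S KUBE-SERVICES | head"
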
	
	
	==> kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] <==
	I0819 18:11:46.769799       1 serving.go:386] Generated self-signed cert in-memory
	W0819 18:11:48.326857       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 18:11:48.326900       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 18:11:48.326911       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 18:11:48.326919       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 18:11:48.400892       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 18:11:48.402726       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:11:48.406393       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 18:11:48.406527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 18:11:48.406580       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 18:11:48.406646       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 18:11:48.507462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:24:11 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:11.128938     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:24:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:14.337899     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091854337147196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:14.338249     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091854337147196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:22 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:22.131755     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:24:24 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:24.340611     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091864340112273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:24 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:24.340993     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091864340112273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:34 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:34.343034     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091874342537741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:34 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:34.343076     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091874342537741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:35 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:35.127865     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:24:44 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:44.146818     942 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:24:44 default-k8s-diff-port-813424 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:24:44 default-k8s-diff-port-813424 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:24:44 default-k8s-diff-port-813424 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:24:44 default-k8s-diff-port-813424 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:24:44 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:44.344873     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091884344434793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:44 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:44.344922     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091884344434793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:48 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:48.128163     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:24:54 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:54.346183     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091894345909293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:24:54 default-k8s-diff-port-813424 kubelet[942]: E0819 18:24:54.346208     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091894345909293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:02 default-k8s-diff-port-813424 kubelet[942]: E0819 18:25:02.129163     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:25:04 default-k8s-diff-port-813424 kubelet[942]: E0819 18:25:04.347832     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091904347406786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:04 default-k8s-diff-port-813424 kubelet[942]: E0819 18:25:04.347855     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091904347406786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:25:14.130333     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:25:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:25:14.349994     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091914349582347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:25:14.350030     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091914349582347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
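
Two separate problems repeat through this kubelet log: the metrics-server container cannot be started because the node cannot pull its image from fake.domain, and the eviction manager rejects CRI-O's ImageFsInfo response as missing image stats. The image pull back-off is what keeps metrics-server-6867b74b74-tp742 out of Running; while that pod still exists, the pull events can be read from it directly:

  kubectl --context default-k8s-diff-port-813424 -n kube-system describe pod metrics-server-6867b74b74-tp742
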
	
	
	==> storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] <==
	I0819 18:12:20.427542       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:12:20.437481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:12:20.437625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:12:37.837007       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:12:37.837186       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813424_1e85614c-1b80-49ff-b874-f378ba5f5dcb!
	I0819 18:12:37.838653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1aa00ed4-3110-4122-8d29-2b0fbcbbcd49", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-813424_1e85614c-1b80-49ff-b874-f378ba5f5dcb became leader
	I0819 18:12:37.938118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813424_1e85614c-1b80-49ff-b874-f378ba5f5dcb!
	
	
	==> storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] <==
	I0819 18:11:49.635529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 18:12:19.639408       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
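
This earlier storage-provisioner attempt died because it could not reach the in-cluster API endpoint (10.96.0.1:443) within about 30s of starting, right after the node restart; the replacement container shown above (c836b0235de70) started at 18:12:20 and went on to acquire the leader lease, so the provisioner itself ends up healthy. A quick sanity check that the service VIP maps to the apiserver would be:

  kubectl --context default-k8s-diff-port-813424 get endpoints kubernetes
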
	

                                                
                                                
-- /stdout --
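
The two storage-provisioner excerpts in the dump above tell a simple story: the instance started at 18:11:49 died with a fatal "error getting server version ... dial tcp 10.96.0.1:443: i/o timeout" because the API server was not yet reachable at the cluster service IP after the restart, and its replacement, started at 18:12:20, then acquired the leader lease normally. The check that failed boils down to a timed request against /version on that service IP. A minimal sketch of such a probe follows; the address comes from the log line, the timeout is arbitrary (the logged call used ?timeout=32s), and a real client would verify the cluster CA instead of skipping TLS verification.

    // version_probe.go: minimal sketch, issue the /version request the provisioner timed out on (auth omitted).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 10 * time.Second, // any bounded value works for a reachability probe
            // Probe only: a real client verifies the cluster CA instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://10.96.0.1:443/version") // service IP taken from the fatal log line
        if err != nil {
            fmt.Println("API server not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%s body=%s\n", resp.Status, body)
    }
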
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tp742
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 describe pod metrics-server-6867b74b74-tp742
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-813424 describe pod metrics-server-6867b74b74-tp742: exit status 1 (59.634573ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tp742" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-813424 describe pod metrics-server-6867b74b74-tp742: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.19s)
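
The post-mortem above first lists pods whose phase is not Running via a field selector (that is how metrics-server-6867b74b74-tp742 was singled out) and then tries to describe them; by the time kubectl describe ran, that pod was already gone, hence the NotFound and exit status 1. The same field-selector query expressed with client-go rather than kubectl might look like the sketch below; the kubeconfig path is illustrative and this is not the helpers_test.go implementation.

    // nonrunning_pods.go: minimal sketch, list pods whose phase is not Running across all namespaces.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; the harness uses its own per-profile kubeconfig.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }
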

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0819 18:18:15.961743   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:19:39.030934   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-233969 -n no-preload-233969
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 18:26:43.34166082 +0000 UTC m=+5665.045828707
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
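
What timed out here is a wait for a Ready pod carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace: the "addons enable dashboard -p no-preload-233969" entry in the Audit table below has an empty End Time column, and no matching pod appeared within the 9-minute budget, so the poll ran into its context deadline (the rate-limiter warning above is that deadline surfacing through the client). A wait of this shape can be written with client-go and the apimachinery wait helpers; the sketch below is an illustration under those assumptions, not the helpers_test.go implementation, and the kubeconfig path is a placeholder.

    // wait_dashboard.go: minimal sketch, wait until a labelled pod reports the Ready condition.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(ctx,
                    metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
                if err != nil {
                    // Transient API errors keep the poll going instead of aborting it.
                    fmt.Println("list failed, retrying:", err)
                    return false, nil
                }
                for _, p := range pods.Items {
                    for _, c := range p.Status.Conditions {
                        if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                            return true, nil
                        }
                    }
                }
                return false, nil
            })
        if err != nil {
            log.Fatalf("no Ready dashboard pod within 9m: %v", err)
        }
        fmt.Println("dashboard pod is Ready")
    }
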
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-233969 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-233969 logs -n 25: (1.244818675s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-975771                              | cert-expiration-975771       | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-233969                  | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-233969                                   | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233045             | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079123        | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233045                  | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-813424       | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:16 UTC |
	|         | default-k8s-diff-port-813424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079123             | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-233045 image list                           | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-814719 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | disable-driver-mounts-814719                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306581            | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC | 19 Aug 24 18:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306581                 | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC | 19 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
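	
	The Audit table records the raw minikube invocations made against this group of profiles; each failing wait in this section traces back to one of these start/stop/addons commands, and entries with an empty End Time column had not finished when the log was captured. Since the suite drives the CLI as a subprocess ("(dbg) Run: out/minikube-linux-amd64 ..."), replaying a single entry outside the suite is just an exec call. The sketch below replays the embed-certs-306581 start entry; the binary path and flags are copied from this report, nothing else is assumed.
	
	    // rerun_start.go: minimal sketch, replay one Audit-table invocation of the minikube CLI.
	    package main
	
	    import (
	        "log"
	        "os"
	        "os/exec"
	    )
	
	    func main() {
	        // Binary path as used throughout this report; args copied from the embed-certs-306581 start entry.
	        cmd := exec.Command("out/minikube-linux-amd64",
	            "start", "-p", "embed-certs-306581",
	            "--memory=2200", "--alsologtostderr", "--wait=true",
	            "--embed-certs", "--driver=kvm2", "--container-runtime=crio",
	            "--kubernetes-version=v1.31.0")
	        cmd.Stdout = os.Stdout
	        cmd.Stderr = os.Stderr
	        if err := cmd.Run(); err != nil {
	            log.Fatalf("minikube start failed: %v", err)
	        }
	    }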
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:15:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:15:52.756356   66229 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:15:52.756664   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756675   66229 out.go:358] Setting ErrFile to fd 2...
	I0819 18:15:52.756680   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756881   66229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:15:52.757409   66229 out.go:352] Setting JSON to false
	I0819 18:15:52.758366   66229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7098,"bootTime":1724084255,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:15:52.758430   66229 start.go:139] virtualization: kvm guest
	I0819 18:15:52.760977   66229 out.go:177] * [embed-certs-306581] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:15:52.762479   66229 notify.go:220] Checking for updates...
	I0819 18:15:52.762504   66229 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:15:52.763952   66229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:15:52.765453   66229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:15:52.766810   66229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:15:52.768135   66229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:15:52.769369   66229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:15:52.771017   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:52.771443   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.771504   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.786463   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0819 18:15:52.786925   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.787501   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.787523   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.787800   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.787975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.788239   66229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:15:52.788527   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.788562   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.803703   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0819 18:15:52.804145   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.804609   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.804625   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.804962   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.805142   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.842707   66229 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:15:52.844070   66229 start.go:297] selected driver: kvm2
	I0819 18:15:52.844092   66229 start.go:901] validating driver "kvm2" against &{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.844258   66229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:15:52.844998   66229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.845085   66229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:15:52.860606   66229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:15:52.861678   66229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:15:52.861730   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:15:52.861742   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:15:52.861793   66229 start.go:340] cluster config:
	{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.862003   66229 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.864173   66229 out.go:177] * Starting "embed-certs-306581" primary control-plane node in "embed-certs-306581" cluster
	I0819 18:15:52.865772   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:15:52.865819   66229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:15:52.865827   66229 cache.go:56] Caching tarball of preloaded images
	I0819 18:15:52.865902   66229 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:15:52.865913   66229 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:15:52.866012   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:15:52.866250   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:15:52.866299   66229 start.go:364] duration metric: took 26.7µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:15:52.866311   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:15:52.866316   66229 fix.go:54] fixHost starting: 
	I0819 18:15:52.866636   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.866671   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.883154   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0819 18:15:52.883648   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.884149   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.884170   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.884509   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.884710   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.884888   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:15:52.886632   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Running err=<nil>
	W0819 18:15:52.886653   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:15:52.888856   66229 out.go:177] * Updating the running kvm2 "embed-certs-306581" VM ...
	I0819 18:15:50.375775   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.376597   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:50.455083   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:50.467702   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:50.467768   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:50.517276   63216 cri.go:89] found id: ""
	I0819 18:15:50.517306   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.517315   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:50.517323   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:50.517399   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:50.550878   63216 cri.go:89] found id: ""
	I0819 18:15:50.550905   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.550914   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:50.550921   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:50.550984   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:50.583515   63216 cri.go:89] found id: ""
	I0819 18:15:50.583543   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.583553   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:50.583560   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:50.583622   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:50.618265   63216 cri.go:89] found id: ""
	I0819 18:15:50.618291   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.618299   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:50.618304   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:50.618362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:50.653436   63216 cri.go:89] found id: ""
	I0819 18:15:50.653461   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.653469   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:50.653476   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:50.653534   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:50.687715   63216 cri.go:89] found id: ""
	I0819 18:15:50.687745   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.687757   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:50.687764   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:50.687885   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:50.721235   63216 cri.go:89] found id: ""
	I0819 18:15:50.721262   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.721272   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:50.721280   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:50.721328   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:50.754095   63216 cri.go:89] found id: ""
	I0819 18:15:50.754126   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.754134   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:50.754143   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:50.754156   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:50.805661   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:50.805698   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:50.819495   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:50.819536   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:50.887296   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:50.887317   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:50.887334   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:50.966224   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:50.966261   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
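	
	Each retry round in this block shells into the old-k8s-version guest, finds no control-plane containers at all (every crictl query above returns an empty id list), and then fails the "describe nodes" step because nothing is listening on localhost:8443. That step reduces to asking the API server, via the kubeconfig the logged command points at (/var/lib/minikube/kubeconfig), for node objects. The sketch below shows that call with client-go purely as an illustration; it would have to run inside the guest, where that kubeconfig and the localhost:8443 address are meaningful, and it is not the code minikube itself runs.
	
	    // list_nodes.go: minimal sketch, the API call behind the failing "describe nodes" step.
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "log"
	
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        // Kubeconfig path as used by the logged kubectl command; only valid inside the guest VM.
	        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            log.Fatal(err)
	        }
	        clientset, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            log.Fatal(err)
	        }
	        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            // With no kube-apiserver container running, this surfaces the same
	            // "connection refused" seen in the log above.
	            log.Fatalf("listing nodes: %v", err)
	        }
	        for _, n := range nodes.Items {
	            fmt.Println(n.Name, n.Status.NodeInfo.KubeletVersion)
	        }
	    }
	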
	I0819 18:15:53.508007   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:53.520812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:53.520870   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:53.552790   63216 cri.go:89] found id: ""
	I0819 18:15:53.552816   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.552823   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:53.552829   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:53.552873   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:53.585937   63216 cri.go:89] found id: ""
	I0819 18:15:53.585969   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.585978   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:53.585986   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:53.586057   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:53.618890   63216 cri.go:89] found id: ""
	I0819 18:15:53.618915   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.618922   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:53.618928   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:53.618975   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:53.650045   63216 cri.go:89] found id: ""
	I0819 18:15:53.650069   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.650076   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:53.650082   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:53.650138   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:53.685069   63216 cri.go:89] found id: ""
	I0819 18:15:53.685097   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.685106   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:53.685113   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:53.685179   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:53.717742   63216 cri.go:89] found id: ""
	I0819 18:15:53.717771   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.717778   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:53.717784   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:53.717832   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:53.747768   63216 cri.go:89] found id: ""
	I0819 18:15:53.747798   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.747806   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:53.747812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:53.747858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:53.779973   63216 cri.go:89] found id: ""
	I0819 18:15:53.779999   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.780006   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:53.780016   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:53.780027   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.815619   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:53.815656   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:53.866767   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:53.866802   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:53.879693   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:53.879721   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:53.947610   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:53.947640   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:53.947659   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:52.172237   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:54.172434   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.890101   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:15:52.890131   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.890374   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:15:52.892900   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893405   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:12:30 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:15:52.893431   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893613   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:15:52.893796   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.893979   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.894149   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:15:52.894328   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:52.894580   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:15:52.894597   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:15:55.789130   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
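	
	The embed-certs restart stalls at this point because provisioning begins with an SSH session to the VM, and 192.168.72.181:22 is not reachable yet; the same "no route to host" dial error repeats at 18:15:55 and 18:15:58 while the guest comes back up. The reachability half of that loop is just a timed TCP dial with retries. The sketch below illustrates such a probe; the address is taken from the log, while the retry cadence and overall deadline are arbitrary choices, not libmachine's.
	
	    // ssh_probe.go: minimal sketch, wait for the guest's SSH port to accept connections.
	    package main
	
	    import (
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        const addr = "192.168.72.181:22" // address taken from the log above
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	            if err == nil {
	                conn.Close()
	                fmt.Println("SSH port reachable")
	                return
	            }
	            fmt.Println("not reachable yet:", err)
	            time.Sleep(3 * time.Second)
	        }
	        fmt.Println("gave up waiting for", addr)
	    }
	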
	I0819 18:15:54.376799   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.884787   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.524639   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:56.537312   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:56.537395   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:56.569913   63216 cri.go:89] found id: ""
	I0819 18:15:56.569958   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.569965   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:56.569972   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:56.570031   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:56.602119   63216 cri.go:89] found id: ""
	I0819 18:15:56.602145   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.602152   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:56.602158   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:56.602211   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:56.634864   63216 cri.go:89] found id: ""
	I0819 18:15:56.634900   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.634910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:56.634920   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:56.634982   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:56.667099   63216 cri.go:89] found id: ""
	I0819 18:15:56.667127   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.667136   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:56.667145   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:56.667194   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:56.703539   63216 cri.go:89] found id: ""
	I0819 18:15:56.703562   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.703571   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:56.703576   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:56.703637   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.734668   63216 cri.go:89] found id: ""
	I0819 18:15:56.734691   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.734698   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:56.734703   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:56.734747   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:56.768840   63216 cri.go:89] found id: ""
	I0819 18:15:56.768866   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.768874   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:56.768880   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:56.768925   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:56.800337   63216 cri.go:89] found id: ""
	I0819 18:15:56.800366   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.800375   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:56.800384   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:56.800398   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:56.866036   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:56.866060   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:56.866072   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:56.955372   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:56.955414   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:57.004450   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:57.004477   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:57.057284   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:57.057320   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.570450   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:59.583640   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:59.583729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:59.617911   63216 cri.go:89] found id: ""
	I0819 18:15:59.617943   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.617954   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:59.617963   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:59.618014   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:59.650239   63216 cri.go:89] found id: ""
	I0819 18:15:59.650265   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.650274   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:59.650279   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:59.650329   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:59.684877   63216 cri.go:89] found id: ""
	I0819 18:15:59.684902   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.684910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:59.684916   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:59.684977   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:59.717378   63216 cri.go:89] found id: ""
	I0819 18:15:59.717402   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.717414   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:59.717428   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:59.717484   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:59.748937   63216 cri.go:89] found id: ""
	I0819 18:15:59.748968   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.748980   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:59.748989   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:59.749058   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.672222   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.171375   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:58.861002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:59.375951   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:01.376193   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:03.376512   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.781784   63216 cri.go:89] found id: ""
	I0819 18:15:59.781819   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.781830   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:59.781837   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:59.781899   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:59.815593   63216 cri.go:89] found id: ""
	I0819 18:15:59.815626   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.815637   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:59.815645   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:59.815709   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:59.847540   63216 cri.go:89] found id: ""
	I0819 18:15:59.847571   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.847581   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:59.847595   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:59.847609   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.860256   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:59.860292   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:59.931873   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:59.931900   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:59.931915   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:00.011897   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:00.011938   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:00.047600   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:00.047628   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.599457   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:02.617040   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:02.617112   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:02.658148   63216 cri.go:89] found id: ""
	I0819 18:16:02.658173   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.658181   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:02.658187   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:02.658256   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:02.711833   63216 cri.go:89] found id: ""
	I0819 18:16:02.711873   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.711882   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:02.711889   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:02.711945   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:02.746611   63216 cri.go:89] found id: ""
	I0819 18:16:02.746644   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.746652   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:02.746658   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:02.746712   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:02.781731   63216 cri.go:89] found id: ""
	I0819 18:16:02.781757   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.781764   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:02.781771   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:02.781827   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:02.814215   63216 cri.go:89] found id: ""
	I0819 18:16:02.814242   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.814253   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:02.814260   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:02.814320   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:02.848767   63216 cri.go:89] found id: ""
	I0819 18:16:02.848804   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.848815   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:02.848823   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:02.848881   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:02.882890   63216 cri.go:89] found id: ""
	I0819 18:16:02.882913   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.882920   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:02.882927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:02.882983   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:02.918333   63216 cri.go:89] found id: ""
	I0819 18:16:02.918362   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.918370   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:02.918393   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:02.918405   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.966994   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:02.967024   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:02.980377   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:02.980437   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:03.045097   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:03.045127   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:03.045145   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:03.126682   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:03.126727   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
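
	[editor's note] The cycle above repeats a fixed diagnostic pattern: for each control-plane component minikube discovers containers with "sudo crictl ps -a --quiet --name=<component>" and, since nothing is found on this node, falls back to kubelet/crio journals, dmesg and overall container status. A minimal sketch of that discover-and-tail loop is shown below; it assumes crictl is on the PATH with sudo rights, and the helper names are illustrative rather than minikube's actual API.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the "crictl ps -a --quiet --name=<name>" calls in the
	// log above: it returns the IDs of all containers (any state) whose name matches.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs mirrors the "crictl logs --tail 400 <id>" calls issued once an ID is found.
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil || len(ids) == 0 {
				// Corresponds to the "No container was found matching ..." warnings above.
				fmt.Printf("W: no container found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				logs, _ := tailLogs(id)
				fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
			}
		}
	}
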
	I0819 18:16:01.671492   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.171471   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.941029   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:05.376677   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:05.376705   62749 pod_ready.go:82] duration metric: took 4m0.006404877s for pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:05.376714   62749 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 18:16:05.376720   62749 pod_ready.go:39] duration metric: took 4m6.335802515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:05.376735   62749 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:16:05.376775   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.376822   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.419678   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:05.419719   62749 cri.go:89] found id: ""
	I0819 18:16:05.419728   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:05.419801   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.424210   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.424271   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.459501   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:05.459527   62749 cri.go:89] found id: ""
	I0819 18:16:05.459535   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:05.459578   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.463654   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.463711   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.497591   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:05.497613   62749 cri.go:89] found id: ""
	I0819 18:16:05.497620   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:05.497667   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.501207   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.501274   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.535112   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:05.535141   62749 cri.go:89] found id: ""
	I0819 18:16:05.535150   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:05.535215   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.538855   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.538909   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.573744   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:05.573769   62749 cri.go:89] found id: ""
	I0819 18:16:05.573776   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:05.573824   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.577981   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.578045   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.616545   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:05.616569   62749 cri.go:89] found id: ""
	I0819 18:16:05.616577   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:05.616630   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.620549   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.620597   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.662743   62749 cri.go:89] found id: ""
	I0819 18:16:05.662781   62749 logs.go:276] 0 containers: []
	W0819 18:16:05.662792   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.662800   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:05.662855   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:05.711433   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.711456   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:05.711463   62749 cri.go:89] found id: ""
	I0819 18:16:05.711472   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:05.711536   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.716476   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.720240   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:05.720261   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.261474   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:06.261523   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:06.384895   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:06.384927   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:06.421665   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:06.421700   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:06.461866   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:06.461900   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:06.496543   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:06.496570   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:06.551478   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:06.551518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:06.586858   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.586886   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.625272   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.625300   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:06.697922   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:06.697960   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:06.711624   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:06.711658   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:06.752648   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:06.752677   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:06.796805   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:06.796836   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.662843   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:05.680724   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.680811   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.719205   63216 cri.go:89] found id: ""
	I0819 18:16:05.719227   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.719234   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:05.719240   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.719283   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.764548   63216 cri.go:89] found id: ""
	I0819 18:16:05.764577   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.764587   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:05.764593   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.764644   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.800478   63216 cri.go:89] found id: ""
	I0819 18:16:05.800503   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.800521   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:05.800527   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.800582   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.837403   63216 cri.go:89] found id: ""
	I0819 18:16:05.837432   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.837443   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:05.837450   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.837506   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.869330   63216 cri.go:89] found id: ""
	I0819 18:16:05.869357   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.869367   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:05.869375   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.869463   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.900354   63216 cri.go:89] found id: ""
	I0819 18:16:05.900382   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.900393   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:05.900401   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.900457   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.933899   63216 cri.go:89] found id: ""
	I0819 18:16:05.933926   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.933937   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.933944   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:05.934003   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:05.968393   63216 cri.go:89] found id: ""
	I0819 18:16:05.968421   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.968430   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:05.968441   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:05.968458   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:05.980957   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:05.980988   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:06.045310   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:06.045359   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:06.045375   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.124351   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.124389   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.168102   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.168130   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:08.718499   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:08.731535   63216 kubeadm.go:597] duration metric: took 4m4.252819836s to restartPrimaryControlPlane
	W0819 18:16:08.731622   63216 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:08.731651   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:06.172578   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.671110   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.013019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:09.338729   62749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:09.355014   62749 api_server.go:72] duration metric: took 4m18.036977131s to wait for apiserver process to appear ...
	I0819 18:16:09.355046   62749 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:16:09.355086   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:09.355148   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:09.390088   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:09.390107   62749 cri.go:89] found id: ""
	I0819 18:16:09.390115   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:09.390161   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.393972   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:09.394024   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:09.426919   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:09.426943   62749 cri.go:89] found id: ""
	I0819 18:16:09.426953   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:09.427007   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.430685   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:09.430755   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:09.465843   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:09.465867   62749 cri.go:89] found id: ""
	I0819 18:16:09.465876   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:09.465936   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.469990   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:09.470057   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:09.503690   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:09.503716   62749 cri.go:89] found id: ""
	I0819 18:16:09.503727   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:09.503789   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.507731   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:09.507791   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:09.541067   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:09.541098   62749 cri.go:89] found id: ""
	I0819 18:16:09.541108   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:09.541169   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.546503   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:09.546568   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:09.587861   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:09.587888   62749 cri.go:89] found id: ""
	I0819 18:16:09.587898   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:09.587960   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.593765   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:09.593831   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:09.628426   62749 cri.go:89] found id: ""
	I0819 18:16:09.628456   62749 logs.go:276] 0 containers: []
	W0819 18:16:09.628464   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:09.628470   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:09.628529   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:09.666596   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.666622   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.666628   62749 cri.go:89] found id: ""
	I0819 18:16:09.666636   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:09.666688   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.670929   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.674840   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:09.674863   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.708286   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:09.708313   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.739212   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:09.739234   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:10.171487   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:10.171535   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:10.208985   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:10.209025   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:10.222001   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:10.222028   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:10.267193   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:10.267225   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:10.300082   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:10.300110   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:10.333403   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:10.333434   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:10.371961   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:10.371989   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:10.425550   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:10.425586   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:10.500742   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:10.500796   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:10.602484   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:10.602518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.149769   62749 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8444/healthz ...
	I0819 18:16:13.154238   62749 api_server.go:279] https://192.168.61.243:8444/healthz returned 200:
	ok
	I0819 18:16:13.155139   62749 api_server.go:141] control plane version: v1.31.0
	I0819 18:16:13.155154   62749 api_server.go:131] duration metric: took 3.800101993s to wait for apiserver health ...
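
	[editor's note] The healthz step above (api_server.go:253/279) polls the apiserver's /healthz endpoint over HTTPS until it answers 200 with body "ok". Below is a minimal polling loop under those assumptions, skipping certificate verification as a bootstrap-time check typically must; it is a sketch, not minikube's implementation, and the address is simply the one seen in the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls https://<hostPort>/healthz until it returns HTTP 200
	// or the deadline passes. Certificate verification is skipped because the
	// apiserver serves a cluster-internal certificate during bootstrap.
	func waitForHealthz(hostPort string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		url := "https://" + hostPort + "/healthz"
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", hostPort, timeout)
	}

	func main() {
		if err := waitForHealthz("192.168.61.243:8444", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
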
	I0819 18:16:13.155161   62749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:16:13.155180   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:13.155232   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:13.194723   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.194749   62749 cri.go:89] found id: ""
	I0819 18:16:13.194759   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:13.194811   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.198645   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:13.198703   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:13.236332   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.236405   62749 cri.go:89] found id: ""
	I0819 18:16:13.236418   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:13.236473   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.240682   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:13.240764   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:13.277257   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:13.277283   62749 cri.go:89] found id: ""
	I0819 18:16:13.277290   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:13.277339   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.281458   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:13.281516   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:13.319419   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.319444   62749 cri.go:89] found id: ""
	I0819 18:16:13.319453   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:13.319508   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.323377   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:13.323444   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:13.357320   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.357344   62749 cri.go:89] found id: ""
	I0819 18:16:13.357353   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:13.357417   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.361505   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:13.361582   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:13.396379   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.396396   62749 cri.go:89] found id: ""
	I0819 18:16:13.396403   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:13.396457   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.400372   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:13.400442   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:13.433520   62749 cri.go:89] found id: ""
	I0819 18:16:13.433551   62749 logs.go:276] 0 containers: []
	W0819 18:16:13.433561   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:13.433569   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:13.433629   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:13.467382   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.467411   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.467418   62749 cri.go:89] found id: ""
	I0819 18:16:13.467427   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:13.467486   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.471371   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.474905   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:13.474924   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:13.547564   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:13.547596   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.593702   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:13.593731   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.629610   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:13.629634   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.669337   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:13.669372   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.729986   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:13.730012   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.766424   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:13.766459   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.806677   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:13.806702   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:13.540438   63216 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.808760826s)
	I0819 18:16:13.540508   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:13.555141   63216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:16:13.565159   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:16:13.575671   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:16:13.575689   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:16:13.575743   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:16:13.586181   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:16:13.586388   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:16:13.597239   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:16:13.606788   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:16:13.606857   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:16:13.616964   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.627128   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:16:13.627195   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.637263   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:16:13.646834   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:16:13.646898   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
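
	[editor's note] The block above is the stale-config cleanup step: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here every grep exits 2 because the files are already gone after "kubeadm reset"). A rough equivalent is sketched below; the endpoint and file list are placeholders for illustration, not values guaranteed by the report.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanupStaleKubeconfigs removes any kubeconfig that does not reference the
	// expected control-plane endpoint, mirroring the grep/rm pairs in the log above.
	func cleanupStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Matches the "may not be in <file> - will remove" messages above.
				fmt.Printf("removing stale config %s\n", f)
				os.Remove(f) // ignore errors, as "rm -f" would
			}
		}
	}

	func main() {
		cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
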
	I0819 18:16:13.657566   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:16:13.887585   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:16:11.171886   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:13.672521   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:14.199046   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:14.199103   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:14.213508   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:14.213537   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:14.341980   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:14.342017   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:14.389817   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:14.389853   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:14.425890   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:14.425928   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:16.991182   62749 system_pods.go:59] 8 kube-system pods found
	I0819 18:16:16.991211   62749 system_pods.go:61] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.991217   62749 system_pods.go:61] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.991221   62749 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.991225   62749 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.991229   62749 system_pods.go:61] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.991232   62749 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.991239   62749 system_pods.go:61] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.991243   62749 system_pods.go:61] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.991250   62749 system_pods.go:74] duration metric: took 3.836084784s to wait for pod list to return data ...
	I0819 18:16:16.991257   62749 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:16:16.993181   62749 default_sa.go:45] found service account: "default"
	I0819 18:16:16.993201   62749 default_sa.go:55] duration metric: took 1.93729ms for default service account to be created ...
	I0819 18:16:16.993208   62749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:16:16.997803   62749 system_pods.go:86] 8 kube-system pods found
	I0819 18:16:16.997825   62749 system_pods.go:89] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.997830   62749 system_pods.go:89] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.997835   62749 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.997840   62749 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.997844   62749 system_pods.go:89] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.997848   62749 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.997854   62749 system_pods.go:89] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.997861   62749 system_pods.go:89] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.997868   62749 system_pods.go:126] duration metric: took 4.655661ms to wait for k8s-apps to be running ...
	I0819 18:16:16.997877   62749 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:16:16.997917   62749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:17.013524   62749 system_svc.go:56] duration metric: took 15.634104ms WaitForService to wait for kubelet
	I0819 18:16:17.013559   62749 kubeadm.go:582] duration metric: took 4m25.695525816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:16:17.013585   62749 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:16:17.016278   62749 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:16:17.016301   62749 node_conditions.go:123] node cpu capacity is 2
	I0819 18:16:17.016315   62749 node_conditions.go:105] duration metric: took 2.723578ms to run NodePressure ...
	I0819 18:16:17.016326   62749 start.go:241] waiting for startup goroutines ...
	I0819 18:16:17.016336   62749 start.go:246] waiting for cluster config update ...
	I0819 18:16:17.016351   62749 start.go:255] writing updated cluster config ...
	I0819 18:16:17.016817   62749 ssh_runner.go:195] Run: rm -f paused
	I0819 18:16:17.063056   62749 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:16:17.065819   62749 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-813424" cluster and "default" namespace by default
	I0819 18:16:14.093007   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:17.164989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:16.172074   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:18.670402   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:20.671024   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:22.671462   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:26.288975   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:25.175354   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:27.671452   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.671496   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.357082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:31.671726   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:33.672458   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:35.437060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:36.171920   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.172318   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.513064   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:40.670687   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:42.670858   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.671276   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.589000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.660996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.171302   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:49.171707   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:51.675414   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:53.665939   62137 pod_ready.go:82] duration metric: took 4m0.001066956s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:53.665969   62137 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:16:53.665994   62137 pod_ready.go:39] duration metric: took 4m12.464901403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:53.666051   62137 kubeadm.go:597] duration metric: took 4m20.502224967s to restartPrimaryControlPlane
	W0819 18:16:53.666114   62137 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:53.666143   62137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:53.740978   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:56.817027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:02.892936   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:05.965053   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:12.048961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:15.116969   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:19.922253   62137 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.256081543s)
	I0819 18:17:19.922334   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:19.937012   62137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:17:19.946269   62137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:17:19.955344   62137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:17:19.955363   62137 kubeadm.go:157] found existing configuration files:
	
	I0819 18:17:19.955405   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:17:19.963979   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:17:19.964039   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:17:19.972679   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:17:19.980890   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:17:19.980947   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:17:19.989705   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:17:19.998606   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:17:19.998664   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:17:20.007553   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:17:20.016136   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:17:20.016185   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:17:20.024827   62137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:17:20.073205   62137 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:17:20.073284   62137 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:17:20.186906   62137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:17:20.187034   62137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:17:20.187125   62137 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:17:20.198750   62137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:17:20.200704   62137 out.go:235]   - Generating certificates and keys ...
	I0819 18:17:20.200810   62137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:17:20.200905   62137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:17:20.201015   62137 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:17:20.201099   62137 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:17:20.201202   62137 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:17:20.201279   62137 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:17:20.201370   62137 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:17:20.201468   62137 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:17:20.201578   62137 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:17:20.201686   62137 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:17:20.201743   62137 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:17:20.201823   62137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:17:20.386866   62137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:17:20.483991   62137 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:17:20.575440   62137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:17:20.704349   62137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:17:20.834890   62137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:17:20.835583   62137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:17:20.839290   62137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:17:21.197002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:20.841232   62137 out.go:235]   - Booting up control plane ...
	I0819 18:17:20.841313   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:17:20.841374   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:17:20.841428   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:17:20.858185   62137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:17:20.866369   62137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:17:20.866447   62137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:17:20.997302   62137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:17:20.997435   62137 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:17:21.499506   62137 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041994ms
	I0819 18:17:21.499625   62137 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:17:26.501489   62137 kubeadm.go:310] [api-check] The API server is healthy after 5.002014094s
	I0819 18:17:26.514398   62137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:17:26.534278   62137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:17:26.557460   62137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:17:26.557706   62137 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-233969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:17:26.569142   62137 kubeadm.go:310] [bootstrap-token] Using token: 2skh80.c6u95wnw3x4gmagv
	I0819 18:17:24.273082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:26.570814   62137 out.go:235]   - Configuring RBAC rules ...
	I0819 18:17:26.570940   62137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:17:26.583073   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:17:26.592407   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:17:26.595488   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:17:26.599062   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:17:26.603754   62137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:17:26.908245   62137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:17:27.340277   62137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:17:27.909394   62137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:17:27.912696   62137 kubeadm.go:310] 
	I0819 18:17:27.912811   62137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:17:27.912834   62137 kubeadm.go:310] 
	I0819 18:17:27.912953   62137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:17:27.912965   62137 kubeadm.go:310] 
	I0819 18:17:27.912996   62137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:17:27.913086   62137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:17:27.913166   62137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:17:27.913178   62137 kubeadm.go:310] 
	I0819 18:17:27.913246   62137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:17:27.913266   62137 kubeadm.go:310] 
	I0819 18:17:27.913338   62137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:17:27.913349   62137 kubeadm.go:310] 
	I0819 18:17:27.913422   62137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:17:27.913527   62137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:17:27.913613   62137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:17:27.913622   62137 kubeadm.go:310] 
	I0819 18:17:27.913727   62137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:17:27.913827   62137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:17:27.913842   62137 kubeadm.go:310] 
	I0819 18:17:27.913934   62137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914073   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:17:27.914112   62137 kubeadm.go:310] 	--control-plane 
	I0819 18:17:27.914121   62137 kubeadm.go:310] 
	I0819 18:17:27.914223   62137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:17:27.914235   62137 kubeadm.go:310] 
	I0819 18:17:27.914353   62137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914499   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:17:27.916002   62137 kubeadm.go:310] W0819 18:17:20.045306    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916280   62137 kubeadm.go:310] W0819 18:17:20.046268    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916390   62137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:17:27.916417   62137 cni.go:84] Creating CNI manager for ""
	I0819 18:17:27.916426   62137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:17:27.918384   62137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:17:27.919646   62137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:17:27.930298   62137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 18:17:27.946332   62137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:17:27.946440   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:27.946462   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-233969 minikube.k8s.io/updated_at=2024_08_19T18_17_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=no-preload-233969 minikube.k8s.io/primary=true
	I0819 18:17:27.972836   62137 ops.go:34] apiserver oom_adj: -16
	I0819 18:17:28.134899   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:28.635909   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.135326   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.635339   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.135992   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.635626   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.135493   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.635632   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.135812   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.208229   62137 kubeadm.go:1113] duration metric: took 4.261865811s to wait for elevateKubeSystemPrivileges
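The repeated `kubectl get sa default` invocations above are minikube waiting for the default service account to appear, which is what the elevateKubeSystemPrivileges duration metric on the preceding line measures. A minimal sketch of that wait, reusing the binary and kubeconfig paths from the log (minikube's own retry logic is implemented in Go; this is an illustration, not its implementation):

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly one attempt every 500ms
    done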
	I0819 18:17:32.208254   62137 kubeadm.go:394] duration metric: took 4m59.094587246s to StartCluster
	I0819 18:17:32.208270   62137 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.208350   62137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:17:32.210604   62137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.210888   62137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:17:32.210967   62137 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:17:32.211052   62137 addons.go:69] Setting storage-provisioner=true in profile "no-preload-233969"
	I0819 18:17:32.211070   62137 addons.go:69] Setting default-storageclass=true in profile "no-preload-233969"
	I0819 18:17:32.211088   62137 addons.go:234] Setting addon storage-provisioner=true in "no-preload-233969"
	I0819 18:17:32.211084   62137 addons.go:69] Setting metrics-server=true in profile "no-preload-233969"
	W0819 18:17:32.211096   62137 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:17:32.211102   62137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-233969"
	I0819 18:17:32.211125   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211126   62137 addons.go:234] Setting addon metrics-server=true in "no-preload-233969"
	W0819 18:17:32.211166   62137 addons.go:243] addon metrics-server should already be in state true
	I0819 18:17:32.211198   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211124   62137 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:17:32.211475   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211505   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211589   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211601   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211619   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211623   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.212714   62137 out.go:177] * Verifying Kubernetes components...
	I0819 18:17:32.214075   62137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:17:32.227207   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0819 18:17:32.227219   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0819 18:17:32.227615   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.227709   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.228122   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228142   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228216   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228236   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228543   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.228610   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.229074   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229112   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.229120   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229147   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.230316   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0819 18:17:32.230746   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.231408   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.231437   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.231812   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.232018   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.235965   62137 addons.go:234] Setting addon default-storageclass=true in "no-preload-233969"
	W0819 18:17:32.235986   62137 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:17:32.236013   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.236365   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.236392   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.244668   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0819 18:17:32.245056   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.245506   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.245534   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.245816   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0819 18:17:32.245848   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.245989   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.246239   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.246795   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.246811   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.247182   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.247380   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.248517   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.249498   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.250817   62137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:17:32.251649   62137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:17:30.348988   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:32.252466   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:17:32.252483   62137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:17:32.252501   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253309   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0819 18:17:32.253687   62137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.253701   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:17:32.253717   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253828   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.254340   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.254352   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.254706   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.255288   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.255324   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.256274   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256776   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.256796   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256970   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.257109   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.257229   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.257348   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.257756   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258132   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.258144   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258384   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.258531   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.258663   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.258788   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.271706   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0819 18:17:32.272115   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.272558   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.272575   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.272875   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.273041   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.274711   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.274914   62137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.274924   62137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:17:32.274936   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.277689   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278191   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.278246   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278358   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.278533   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.278701   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.278847   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.423546   62137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:17:32.445680   62137 node_ready.go:35] waiting up to 6m0s for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.471999   62137 node_ready.go:49] node "no-preload-233969" has status "Ready":"True"
	I0819 18:17:32.472028   62137 node_ready.go:38] duration metric: took 26.307315ms for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.472041   62137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:32.478401   62137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:32.518483   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.568928   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:17:32.568953   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:17:32.592301   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.645484   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:17:32.645513   62137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:17:32.715522   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:32.715552   62137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:17:32.781693   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:33.756997   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.238477445s)
	I0819 18:17:33.757035   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757044   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757051   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.164710772s)
	I0819 18:17:33.757088   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757101   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757454   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757450   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757466   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757475   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757483   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757490   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757538   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757564   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757616   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757640   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757712   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757729   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757733   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757852   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757915   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757937   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.831562   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.831588   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.831891   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.831907   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928005   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146269845s)
	I0819 18:17:33.928064   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928082   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928391   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928438   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928452   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928465   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928477   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928809   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928820   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928835   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928851   62137 addons.go:475] Verifying addon metrics-server=true in "no-preload-233969"
	I0819 18:17:33.930974   62137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 18:17:33.932101   62137 addons.go:510] duration metric: took 1.72114773s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
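The metrics-server manifests applied above create a Deployment in kube-system plus the metrics.k8s.io APIService. An illustrative way to inspect the addon from outside the test harness, assuming the same kubectl context (the test itself verifies the addon through addons.go, as logged above):

    kubectl --context no-preload-233969 -n kube-system get deploy metrics-server
    kubectl --context no-preload-233969 top nodes   # only succeeds once the metrics APIService is Available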
	I0819 18:17:34.486566   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:33.421045   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:36.984891   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.484617   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.500962   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:42.572983   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:41.990189   62137 pod_ready.go:93] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.990210   62137 pod_ready.go:82] duration metric: took 9.511780534s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.990221   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997282   62137 pod_ready.go:93] pod "kube-apiserver-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.997301   62137 pod_ready.go:82] duration metric: took 7.074393ms for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997310   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008757   62137 pod_ready.go:93] pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.008775   62137 pod_ready.go:82] duration metric: took 11.458424ms for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008785   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017802   62137 pod_ready.go:93] pod "kube-proxy-pt5nj" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.017820   62137 pod_ready.go:82] duration metric: took 9.029628ms for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017828   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025402   62137 pod_ready.go:93] pod "kube-scheduler-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.025424   62137 pod_ready.go:82] duration metric: took 7.589229ms for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025433   62137 pod_ready.go:39] duration metric: took 9.553379252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:42.025451   62137 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:17:42.025508   62137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:17:42.043190   62137 api_server.go:72] duration metric: took 9.832267712s to wait for apiserver process to appear ...
	I0819 18:17:42.043214   62137 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:17:42.043231   62137 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I0819 18:17:42.051124   62137 api_server.go:279] https://192.168.50.8:8443/healthz returned 200:
	ok
	I0819 18:17:42.052367   62137 api_server.go:141] control plane version: v1.31.0
	I0819 18:17:42.052392   62137 api_server.go:131] duration metric: took 9.170652ms to wait for apiserver health ...
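The healthz wait above succeeds as soon as https://192.168.50.8:8443/healthz returns HTTP 200 with body "ok". The same check can be reproduced by hand; the command below assumes the endpoint from the log, skips TLS verification for brevity, and relies on anonymous access to /healthz being allowed (the Kubernetes default via the system:public-info-viewer binding):

    curl -k https://192.168.50.8:8443/healthz
    # a healthy apiserver answers with:
    # ok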
	I0819 18:17:42.052404   62137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:17:42.187227   62137 system_pods.go:59] 9 kube-system pods found
	I0819 18:17:42.187254   62137 system_pods.go:61] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.187259   62137 system_pods.go:61] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.187263   62137 system_pods.go:61] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.187267   62137 system_pods.go:61] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.187270   62137 system_pods.go:61] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.187273   62137 system_pods.go:61] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.187277   62137 system_pods.go:61] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.187282   62137 system_pods.go:61] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.187285   62137 system_pods.go:61] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.187292   62137 system_pods.go:74] duration metric: took 134.882111ms to wait for pod list to return data ...
	I0819 18:17:42.187299   62137 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:17:42.382612   62137 default_sa.go:45] found service account: "default"
	I0819 18:17:42.382643   62137 default_sa.go:55] duration metric: took 195.337173ms for default service account to be created ...
	I0819 18:17:42.382652   62137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:17:42.585988   62137 system_pods.go:86] 9 kube-system pods found
	I0819 18:17:42.586024   62137 system_pods.go:89] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.586032   62137 system_pods.go:89] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.586038   62137 system_pods.go:89] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.586044   62137 system_pods.go:89] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.586049   62137 system_pods.go:89] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.586056   62137 system_pods.go:89] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.586062   62137 system_pods.go:89] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.586072   62137 system_pods.go:89] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.586078   62137 system_pods.go:89] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.586089   62137 system_pods.go:126] duration metric: took 203.431371ms to wait for k8s-apps to be running ...
	I0819 18:17:42.586101   62137 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:17:42.586154   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:42.601268   62137 system_svc.go:56] duration metric: took 15.156104ms WaitForService to wait for kubelet
	I0819 18:17:42.601305   62137 kubeadm.go:582] duration metric: took 10.39038433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:17:42.601330   62137 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:17:42.783030   62137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:17:42.783058   62137 node_conditions.go:123] node cpu capacity is 2
	I0819 18:17:42.783069   62137 node_conditions.go:105] duration metric: took 181.734608ms to run NodePressure ...
	I0819 18:17:42.783080   62137 start.go:241] waiting for startup goroutines ...
	I0819 18:17:42.783087   62137 start.go:246] waiting for cluster config update ...
	I0819 18:17:42.783097   62137 start.go:255] writing updated cluster config ...
	I0819 18:17:42.783349   62137 ssh_runner.go:195] Run: rm -f paused
	I0819 18:17:42.831445   62137 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:17:42.833881   62137 out.go:177] * Done! kubectl is now configured to use "no-preload-233969" cluster and "default" namespace by default
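At this point the kubeconfig written to /home/jenkins/minikube-integration/19478-10654/kubeconfig selects the no-preload-233969 context. An illustrative sanity check one could run here (not part of the test's own assertions):

    kubectl config current-context                  # expected: no-preload-233969
    kubectl --context no-preload-233969 get nodes   # the single control-plane node should report Ready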
	I0819 18:17:48.653035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:51.725070   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:57.805043   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:00.881114   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:06.956979   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.974002   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:18:09.974108   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:18:09.975602   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:18:09.975650   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:18:09.975736   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:18:09.975861   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:18:09.975993   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:18:09.976086   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:18:09.978023   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:18:09.978100   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:18:09.978157   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:18:09.978230   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:18:09.978281   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:18:09.978358   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:18:09.978408   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:18:09.978466   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:18:09.978529   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:18:09.978645   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:18:09.978758   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:18:09.978816   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:18:09.978890   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:18:09.978973   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:18:09.979046   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:18:09.979138   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:18:09.979191   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:18:09.979339   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:18:09.979438   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:18:09.979503   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:18:09.979595   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:18:10.028995   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.981931   63216 out.go:235]   - Booting up control plane ...
	I0819 18:18:09.982014   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:18:09.982087   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:18:09.982142   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:18:09.982213   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:18:09.982378   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:18:09.982432   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:18:09.982491   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982715   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982914   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982996   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983204   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983268   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983424   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983485   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983656   63216 kubeadm.go:310] 
	I0819 18:18:09.983705   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:18:09.983747   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:18:09.983754   63216 kubeadm.go:310] 
	I0819 18:18:09.983788   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:18:09.983818   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:18:09.983957   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:18:09.983982   63216 kubeadm.go:310] 
	I0819 18:18:09.984089   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:18:09.984119   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:18:09.984175   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:18:09.984186   63216 kubeadm.go:310] 
	I0819 18:18:09.984277   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:18:09.984372   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:18:09.984378   63216 kubeadm.go:310] 
	I0819 18:18:09.984474   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:18:09.984552   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:18:09.984621   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:18:09.984699   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:18:09.984762   63216 kubeadm.go:310] 
	W0819 18:18:09.984832   63216 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 18:18:09.984873   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:18:10.439037   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:10.453739   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:18:10.463241   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:18:10.463262   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:18:10.463313   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:18:10.472407   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:18:10.472467   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:18:10.481297   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:18:10.489478   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:18:10.489542   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:18:10.498042   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.506373   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:18:10.506433   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.515158   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:18:10.523412   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:18:10.523483   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:18:10.532060   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:18:10.746836   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:18:16.109014   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:19.180970   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:25.261041   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:28.333057   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:34.412966   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:37.485036   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:43.565013   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:46.637059   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:52.716967   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:55.789060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:01.869005   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:04.941027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:11.020989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:14.093067   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:20.173021   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:23.248974   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:29.324961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:32.397037   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:38.477031   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:41.549001   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:47.629019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:50.700996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:56.781035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:59.853000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:06.430174   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:20:06.430256   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:20:06.431894   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:20:06.431968   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:20:06.432060   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:20:06.432203   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:20:06.432334   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:20:06.432440   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:20:06.434250   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:20:06.434349   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:20:06.434444   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:20:06.434563   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:20:06.434623   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:20:06.434717   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:20:06.434805   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:20:06.434894   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:20:06.434974   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:20:06.435052   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:20:06.435135   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:20:06.435204   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:20:06.435288   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:20:06.435365   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:20:06.435421   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:20:06.435474   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:20:06.435531   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:20:06.435689   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:20:06.435781   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:20:06.435827   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:20:06.435886   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:20:06.437538   63216 out.go:235]   - Booting up control plane ...
	I0819 18:20:06.437678   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:20:06.437771   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:20:06.437852   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:20:06.437928   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:20:06.438063   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:20:06.438105   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:20:06.438164   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438342   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438416   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438568   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438637   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438821   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438902   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439167   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439264   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439458   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439472   63216 kubeadm.go:310] 
	I0819 18:20:06.439514   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:20:06.439547   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:20:06.439553   63216 kubeadm.go:310] 
	I0819 18:20:06.439583   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:20:06.439626   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:20:06.439732   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:20:06.439749   63216 kubeadm.go:310] 
	I0819 18:20:06.439873   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:20:06.439915   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:20:06.439944   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:20:06.439952   63216 kubeadm.go:310] 
	I0819 18:20:06.440039   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:20:06.440106   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:20:06.440113   63216 kubeadm.go:310] 
	I0819 18:20:06.440252   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:20:06.440329   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:20:06.440392   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:20:06.440458   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:20:06.440521   63216 kubeadm.go:394] duration metric: took 8m2.012853316s to StartCluster
	I0819 18:20:06.440524   63216 kubeadm.go:310] 
	I0819 18:20:06.440559   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:20:06.440610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:20:06.481255   63216 cri.go:89] found id: ""
	I0819 18:20:06.481285   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.481297   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:20:06.481305   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:20:06.481364   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:20:06.516769   63216 cri.go:89] found id: ""
	I0819 18:20:06.516801   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.516811   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:20:06.516818   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:20:06.516933   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:20:06.551964   63216 cri.go:89] found id: ""
	I0819 18:20:06.551998   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.552006   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:20:06.552014   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:20:06.552108   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:20:06.586084   63216 cri.go:89] found id: ""
	I0819 18:20:06.586115   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.586124   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:20:06.586131   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:20:06.586189   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:20:06.620732   63216 cri.go:89] found id: ""
	I0819 18:20:06.620773   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.620785   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:20:06.620792   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:20:06.620843   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:20:06.659731   63216 cri.go:89] found id: ""
	I0819 18:20:06.659762   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.659772   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:20:06.659780   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:20:06.659846   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:20:06.694223   63216 cri.go:89] found id: ""
	I0819 18:20:06.694257   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.694267   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:20:06.694275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:20:06.694337   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:20:06.727474   63216 cri.go:89] found id: ""
	I0819 18:20:06.727508   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.727518   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:20:06.727528   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:20:06.727538   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:20:06.778006   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:20:06.778041   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:20:06.792059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:20:06.792089   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:20:06.863596   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:20:06.863625   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:20:06.863637   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:20:06.979710   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:20:06.979752   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 18:20:07.030879   63216 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:20:07.030930   63216 out.go:270] * 
	W0819 18:20:07.031004   63216 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.031025   63216 out.go:270] * 
	W0819 18:20:07.031896   63216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:20:07.035220   63216 out.go:201] 
	W0819 18:20:07.036384   63216 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.036435   63216 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:20:07.036466   63216 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:20:07.037783   63216 out.go:201] 
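The suggestion above amounts to retrying the start with the kubelet cgroup driver pinned to systemd. A minimal sketch of such an invocation, assuming the kvm2 driver and CRI-O runtime that appear elsewhere in this log; <profile> is a placeholder and the surrounding flags are illustrative, not copied from the failing run:

	# Hypothetical retry; only --extra-config=kubelet.cgroup-driver=systemd comes from the suggestion above
	minikube start -p <profile> \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd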
	I0819 18:20:05.933003   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:09.009065   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:15.085040   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:18.160990   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:24.236968   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:27.308959   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:30.310609   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:20:30.310648   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.310938   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:30.310975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.311173   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:30.312703   66229 machine.go:96] duration metric: took 4m37.4225796s to provisionDockerMachine
	I0819 18:20:30.312767   66229 fix.go:56] duration metric: took 4m37.446430724s for fixHost
	I0819 18:20:30.312775   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 4m37.446469265s
	W0819 18:20:30.312789   66229 start.go:714] error starting host: provision: host is not running
	W0819 18:20:30.312878   66229 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 18:20:30.312887   66229 start.go:729] Will try again in 5 seconds ...
	I0819 18:20:35.313124   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:20:35.313223   66229 start.go:364] duration metric: took 60.186µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:20:35.313247   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:20:35.313256   66229 fix.go:54] fixHost starting: 
	I0819 18:20:35.313555   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:20:35.313581   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:20:35.330972   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0819 18:20:35.331433   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:20:35.331878   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:20:35.331897   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:20:35.332189   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:20:35.332376   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:35.332546   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:20:35.334335   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Stopped err=<nil>
	I0819 18:20:35.334360   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	W0819 18:20:35.334529   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:20:35.336031   66229 out.go:177] * Restarting existing kvm2 VM for "embed-certs-306581" ...
	I0819 18:20:35.337027   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Start
	I0819 18:20:35.337166   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring networks are active...
	I0819 18:20:35.337905   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network default is active
	I0819 18:20:35.338212   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network mk-embed-certs-306581 is active
	I0819 18:20:35.338534   66229 main.go:141] libmachine: (embed-certs-306581) Getting domain xml...
	I0819 18:20:35.339265   66229 main.go:141] libmachine: (embed-certs-306581) Creating domain...
	I0819 18:20:36.576142   66229 main.go:141] libmachine: (embed-certs-306581) Waiting to get IP...
	I0819 18:20:36.577067   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.577471   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.577553   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.577459   67882 retry.go:31] will retry after 288.282156ms: waiting for machine to come up
	I0819 18:20:36.866897   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.867437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.867507   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.867415   67882 retry.go:31] will retry after 357.773556ms: waiting for machine to come up
	I0819 18:20:37.227139   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.227672   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.227697   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.227620   67882 retry.go:31] will retry after 360.777442ms: waiting for machine to come up
	I0819 18:20:37.590245   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.590696   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.590725   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.590672   67882 retry.go:31] will retry after 502.380794ms: waiting for machine to come up
	I0819 18:20:38.094422   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.094938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.094963   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.094893   67882 retry.go:31] will retry after 716.370935ms: waiting for machine to come up
	I0819 18:20:38.812922   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.813416   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.813437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.813381   67882 retry.go:31] will retry after 728.320282ms: waiting for machine to come up
	I0819 18:20:39.543316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:39.543705   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:39.543731   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:39.543668   67882 retry.go:31] will retry after 725.532345ms: waiting for machine to come up
	I0819 18:20:40.270826   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:40.271325   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:40.271347   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:40.271280   67882 retry.go:31] will retry after 1.054064107s: waiting for machine to come up
	I0819 18:20:41.326463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:41.326952   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:41.326983   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:41.326896   67882 retry.go:31] will retry after 1.258426337s: waiting for machine to come up
	I0819 18:20:42.587252   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:42.587685   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:42.587715   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:42.587645   67882 retry.go:31] will retry after 1.884128664s: waiting for machine to come up
	I0819 18:20:44.474042   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:44.474569   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:44.474592   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:44.474528   67882 retry.go:31] will retry after 2.484981299s: waiting for machine to come up
	I0819 18:20:46.961480   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:46.961991   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:46.962010   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:46.961956   67882 retry.go:31] will retry after 2.912321409s: waiting for machine to come up
	I0819 18:20:49.877938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:49.878388   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:49.878414   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:49.878347   67882 retry.go:31] will retry after 4.020459132s: waiting for machine to come up
	I0819 18:20:53.901782   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902239   66229 main.go:141] libmachine: (embed-certs-306581) Found IP for machine: 192.168.72.181
	I0819 18:20:53.902260   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has current primary IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902266   66229 main.go:141] libmachine: (embed-certs-306581) Reserving static IP address...
	I0819 18:20:53.902757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.902779   66229 main.go:141] libmachine: (embed-certs-306581) DBG | skip adding static IP to network mk-embed-certs-306581 - found existing host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"}
	I0819 18:20:53.902789   66229 main.go:141] libmachine: (embed-certs-306581) Reserved static IP address: 192.168.72.181
	I0819 18:20:53.902800   66229 main.go:141] libmachine: (embed-certs-306581) Waiting for SSH to be available...
	I0819 18:20:53.902808   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Getting to WaitForSSH function...
	I0819 18:20:53.904907   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905284   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.905316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH client type: external
	I0819 18:20:53.905434   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa (-rw-------)
	I0819 18:20:53.905466   66229 main.go:141] libmachine: (embed-certs-306581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:20:53.905481   66229 main.go:141] libmachine: (embed-certs-306581) DBG | About to run SSH command:
	I0819 18:20:53.905493   66229 main.go:141] libmachine: (embed-certs-306581) DBG | exit 0
	I0819 18:20:54.024614   66229 main.go:141] libmachine: (embed-certs-306581) DBG | SSH cmd err, output: <nil>: 
	I0819 18:20:54.024991   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetConfigRaw
	I0819 18:20:54.025614   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.028496   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.028901   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.028935   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.029207   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:20:54.029412   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:20:54.029430   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.029630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.032073   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032436   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.032463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032647   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.032822   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033002   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033136   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.033284   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.033483   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.033498   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:20:54.132908   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 18:20:54.132938   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133214   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:54.133238   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133426   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.135967   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136324   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.136356   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136507   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.136713   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.136873   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.137028   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.137215   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.137423   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.137437   66229 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-306581 && echo "embed-certs-306581" | sudo tee /etc/hostname
	I0819 18:20:54.250819   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-306581
	
	I0819 18:20:54.250849   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.253776   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254119   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.254150   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254351   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.254574   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254718   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254872   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.255090   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.255269   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.255286   66229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-306581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-306581/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-306581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:20:54.361268   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
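The shell fragment run over SSH just above is the provisioner's idempotent /etc/hosts update: if no line already maps the new hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A minimal Go sketch of that same logic, for illustration only (the helper name and file handling here are assumptions, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the logged shell: if no /etc/hosts line ends in the
// hostname, rewrite an existing "127.0.1.1 ..." line or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(content) {
		return nil // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-306581"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}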
	I0819 18:20:54.361300   66229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:20:54.361328   66229 buildroot.go:174] setting up certificates
	I0819 18:20:54.361342   66229 provision.go:84] configureAuth start
	I0819 18:20:54.361359   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.361630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.364099   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364511   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.364541   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364666   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.366912   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367301   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.367329   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367447   66229 provision.go:143] copyHostCerts
	I0819 18:20:54.367496   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:20:54.367515   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:20:54.367586   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:20:54.367687   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:20:54.367699   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:20:54.367737   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:20:54.367824   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:20:54.367834   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:20:54.367860   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:20:54.367919   66229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.embed-certs-306581 san=[127.0.0.1 192.168.72.181 embed-certs-306581 localhost minikube]
	I0819 18:20:54.424019   66229 provision.go:177] copyRemoteCerts
	I0819 18:20:54.424075   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:20:54.424096   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.426737   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.426994   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.427016   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.427171   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.427380   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.427523   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.427645   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.506517   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:20:54.530454   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 18:20:54.552740   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:20:54.574870   66229 provision.go:87] duration metric: took 213.51055ms to configureAuth
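configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, the machine IP, the hostname, localhost and minikube, signed by the shared CA. A hedged sketch of producing such a SAN certificate with crypto/x509; it signs with a throwaway in-memory CA instead of the ca.pem/ca-key.pem pair the log refers to, and error handling is elided for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for ca.pem / ca-key.pem).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-306581"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-306581", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.181")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}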
	I0819 18:20:54.574904   66229 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:20:54.575077   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:20:54.575213   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.577946   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578283   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.578312   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578484   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.578683   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578878   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578993   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.579122   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.579267   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.579281   66229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:20:54.825788   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:20:54.825815   66229 machine.go:96] duration metric: took 796.390996ms to provisionDockerMachine
	I0819 18:20:54.825826   66229 start.go:293] postStartSetup for "embed-certs-306581" (driver="kvm2")
	I0819 18:20:54.825836   66229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:20:54.825850   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.826187   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:20:54.826214   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.829048   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829433   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.829462   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829582   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.829819   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.829963   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.830093   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.911609   66229 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:20:54.915894   66229 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:20:54.915916   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:20:54.915979   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:20:54.916049   66229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:20:54.916134   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:20:54.926185   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:20:54.952362   66229 start.go:296] duration metric: took 126.500839ms for postStartSetup
	I0819 18:20:54.952401   66229 fix.go:56] duration metric: took 19.639145598s for fixHost
	I0819 18:20:54.952420   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.955522   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.955881   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.955909   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.956078   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.956270   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956450   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.956785   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.956940   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.956950   66229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:20:55.053204   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091655.030704823
	
	I0819 18:20:55.053229   66229 fix.go:216] guest clock: 1724091655.030704823
	I0819 18:20:55.053237   66229 fix.go:229] Guest: 2024-08-19 18:20:55.030704823 +0000 UTC Remote: 2024-08-19 18:20:54.952405352 +0000 UTC m=+302.228892640 (delta=78.299471ms)
	I0819 18:20:55.053254   66229 fix.go:200] guest clock delta is within tolerance: 78.299471ms
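The fix.go lines above read the guest clock over SSH (date +%s.%N) and proceed only if the guest/host difference stays inside a tolerance. A tiny illustrative version of that comparison (the one-second tolerance is an assumption, not minikube's configured value):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock, as in the "guest clock delta is within tolerance" log line.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(78 * time.Millisecond) // delta comparable to the logged 78.299471ms
	fmt.Println(withinTolerance(guest, host, time.Second))
}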
	I0819 18:20:55.053261   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 19.740028573s
	I0819 18:20:55.053277   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.053530   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:55.056146   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056523   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.056546   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056677   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057135   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057320   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057404   66229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:20:55.057445   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.057497   66229 ssh_runner.go:195] Run: cat /version.json
	I0819 18:20:55.057518   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.059944   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.059969   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060265   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060296   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060359   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060416   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060528   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060672   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060778   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060838   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060899   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.060941   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.183438   66229 ssh_runner.go:195] Run: systemctl --version
	I0819 18:20:55.189341   66229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:20:55.330628   66229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:20:55.336807   66229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:20:55.336877   66229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:20:55.351865   66229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:20:55.351893   66229 start.go:495] detecting cgroup driver to use...
	I0819 18:20:55.351988   66229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:20:55.368983   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:20:55.382795   66229 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:20:55.382848   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:20:55.396175   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:20:55.409333   66229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:20:55.534054   66229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:20:55.685410   66229 docker.go:233] disabling docker service ...
	I0819 18:20:55.685483   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:20:55.699743   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:20:55.712425   66229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:20:55.842249   66229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:20:55.964126   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:20:55.978354   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:20:55.995963   66229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:20:55.996032   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.006717   66229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:20:56.006810   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.017350   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.027098   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.037336   66229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:20:56.047188   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.059128   66229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.076950   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
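The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the cgroupfs cgroup manager, sets conmon_cgroup = "pod", and opens net.ipv4.ip_unprivileged_port_start via default_sysctls. A small in-memory Go sketch of the same line-oriented rewrite, shown only for the cgroup_manager rule (illustrative, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}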
	I0819 18:20:56.087819   66229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:20:56.097922   66229 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:20:56.097980   66229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:20:56.114569   66229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:20:56.130215   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:20:56.243812   66229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:20:56.376166   66229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:20:56.376294   66229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:20:56.380916   66229 start.go:563] Will wait 60s for crictl version
	I0819 18:20:56.380973   66229 ssh_runner.go:195] Run: which crictl
	I0819 18:20:56.384492   66229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:20:56.421992   66229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
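Both 60-second waits above, first for /var/run/crio/crio.sock and then for a working crictl version, are simple poll-until-deadline loops. A hedged sketch of that pattern for the socket path (poll interval and error text are assumptions):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the deadline passes, roughly
// what "Will wait 60s for socket path /var/run/crio/crio.sock" describes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}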
	I0819 18:20:56.422058   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.448657   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.477627   66229 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:20:56.479098   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:56.482364   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:56.482800   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482997   66229 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 18:20:56.486798   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:20:56.498662   66229 kubeadm.go:883] updating cluster {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0819 18:20:56.498820   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:20:56.498890   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:56.534076   66229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:20:56.534137   66229 ssh_runner.go:195] Run: which lz4
	I0819 18:20:56.537906   66229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:20:56.541691   66229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:20:56.541726   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:20:57.728202   66229 crio.go:462] duration metric: took 1.190335452s to copy over tarball
	I0819 18:20:57.728263   66229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:20:59.870389   66229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.142096936s)
	I0819 18:20:59.870434   66229 crio.go:469] duration metric: took 2.142210052s to extract the tarball
	I0819 18:20:59.870443   66229 ssh_runner.go:146] rm: /preloaded.tar.lz4
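The preload step above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached cri-o image tarball over when it does not, and unpacks it into /var while preserving xattrs. A sketch of the extract step using os/exec, assuming tar and lz4 are available on the guest as the log indicates:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}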
	I0819 18:20:59.907013   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:59.949224   66229 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:20:59.949244   66229 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:20:59.949257   66229 kubeadm.go:934] updating node { 192.168.72.181 8443 v1.31.0 crio true true} ...
	I0819 18:20:59.949790   66229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-306581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:20:59.949851   66229 ssh_runner.go:195] Run: crio config
	I0819 18:20:59.993491   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:20:59.993521   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:20:59.993535   66229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:20:59.993561   66229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.181 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-306581 NodeName:embed-certs-306581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:20:59.993735   66229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-306581"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:20:59.993814   66229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:21:00.003488   66229 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:21:00.003563   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:21:00.012546   66229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0819 18:21:00.028546   66229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:21:00.044037   66229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
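The kubeadm/kubelet/kube-proxy documents dumped above are rendered from the logged option struct and written to /var/tmp/minikube/kubeadm.yaml.new here. A trimmed text/template sketch that renders only the InitConfiguration part from a small struct; the struct fields and template name are assumptions for illustration, not minikube's own types:

package main

import (
	"os"
	"text/template"
)

// initCfg is a hypothetical subset of the options fed into the template.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	NodeIP           string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.72.181",
		BindPort:         8443,
		NodeName:         "embed-certs-306581",
		NodeIP:           "192.168.72.181",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}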
	I0819 18:21:00.059422   66229 ssh_runner.go:195] Run: grep 192.168.72.181	control-plane.minikube.internal$ /etc/hosts
	I0819 18:21:00.062992   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:21:00.075172   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:21:00.213050   66229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:21:00.230086   66229 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581 for IP: 192.168.72.181
	I0819 18:21:00.230114   66229 certs.go:194] generating shared ca certs ...
	I0819 18:21:00.230135   66229 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:21:00.230303   66229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:21:00.230371   66229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:21:00.230386   66229 certs.go:256] generating profile certs ...
	I0819 18:21:00.230506   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/client.key
	I0819 18:21:00.230593   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key.cf6a9e5e
	I0819 18:21:00.230652   66229 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key
	I0819 18:21:00.230819   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:21:00.230863   66229 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:21:00.230877   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:21:00.230912   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:21:00.230951   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:21:00.230985   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:21:00.231053   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:21:00.231968   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:21:00.265793   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:21:00.292911   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:21:00.333617   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:21:00.361258   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 18:21:00.394711   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:21:00.417880   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:21:00.440771   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:21:00.464416   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:21:00.489641   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:21:00.512135   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:21:00.535608   66229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:21:00.552131   66229 ssh_runner.go:195] Run: openssl version
	I0819 18:21:00.557821   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:21:00.568710   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573178   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573239   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.578820   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:21:00.589649   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:21:00.600652   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.604986   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.605049   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.610552   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:21:00.620514   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:21:00.630217   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634541   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634599   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.639839   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 18:21:00.649821   66229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:21:00.654288   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:21:00.660071   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:21:00.665354   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:21:00.670791   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:21:00.676451   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:21:00.682099   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:21:00.687792   66229 kubeadm.go:392] StartCluster: {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:21:00.687869   66229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:21:00.687914   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.730692   66229 cri.go:89] found id: ""
	I0819 18:21:00.730762   66229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:21:00.740607   66229 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 18:21:00.740627   66229 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 18:21:00.740687   66229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 18:21:00.750127   66229 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:21:00.751927   66229 kubeconfig.go:125] found "embed-certs-306581" server: "https://192.168.72.181:8443"
	I0819 18:21:00.754865   66229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 18:21:00.764102   66229 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.181
	I0819 18:21:00.764130   66229 kubeadm.go:1160] stopping kube-system containers ...
	I0819 18:21:00.764142   66229 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 18:21:00.764210   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.797866   66229 cri.go:89] found id: ""
	I0819 18:21:00.797939   66229 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 18:21:00.815065   66229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:21:00.824279   66229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:21:00.824297   66229 kubeadm.go:157] found existing configuration files:
	
	I0819 18:21:00.824336   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:21:00.832688   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:21:00.832766   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:21:00.841795   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:21:00.852300   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:21:00.852358   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:21:00.862973   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.873195   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:21:00.873243   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.882559   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:21:00.892687   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:21:00.892774   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:21:00.903746   66229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:21:00.913161   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.017511   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.829503   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.047620   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.105126   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.157817   66229 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:21:02.157927   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:02.658716   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.158468   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.658865   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.157979   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.175682   66229 api_server.go:72] duration metric: took 2.017872037s to wait for apiserver process to appear ...
	I0819 18:21:04.175711   66229 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:21:04.175731   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.251226   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.251253   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.251265   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.290762   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.290788   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.676347   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.695167   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:07.695220   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.176382   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.183772   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:08.183816   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.676435   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.680898   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0819 18:21:08.686996   66229 api_server.go:141] control plane version: v1.31.0
	I0819 18:21:08.687023   66229 api_server.go:131] duration metric: took 4.511304673s to wait for apiserver health ...
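Editor's note: the healthz progression above (403 for "system:anonymous", then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200) is the normal startup sequence; anonymous access to /healthz is only granted once the bootstrap RBAC roles exist. A minimal sketch of probing the same endpoint by hand — the IP and port come from the log, the CA path is an assumption based on the certificateDir shown later in this run:

    # unauthenticated probe; 403 until bootstrap RBAC is in place, then 500/200
    curl -sk 'https://192.168.72.181:8443/healthz'

    # verbose variant trusting minikube's CA (path assumed: kubeadm's certificateDir)
    curl -s --cacert /var/lib/minikube/certs/ca.crt 'https://192.168.72.181:8443/healthz?verbose'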
	I0819 18:21:08.687031   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:21:08.687037   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:21:08.688988   66229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:21:08.690213   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:21:08.701051   66229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
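Editor's note: the 496-byte file copied above is minikube's bridge CNI conflist. Its exact contents are not reproduced in this log; the sketch below only illustrates the general shape of a bridge + host-local + portmap conflist and writes to a scratch path, so every field value and the /tmp filename are assumptions, not the file minikube actually installs:

    # inspect the real file on the node
    sudo cat /etc/cni/net.d/1-k8s.conflist

    # illustrative shape only -- values are assumptions
    tee /tmp/1-k8s.conflist.example >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF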
	I0819 18:21:08.719754   66229 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:21:08.732139   66229 system_pods.go:59] 8 kube-system pods found
	I0819 18:21:08.732172   66229 system_pods.go:61] "coredns-6f6b679f8f-222n6" [1d55fb75-011d-4517-8601-b55ff22d0fe1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:21:08.732179   66229 system_pods.go:61] "etcd-embed-certs-306581" [0b299b0b-00ec-45d6-9e5f-6f8677734138] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 18:21:08.732187   66229 system_pods.go:61] "kube-apiserver-embed-certs-306581" [c0342f0d-3e9b-4118-abcb-e6585ec8205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 18:21:08.732192   66229 system_pods.go:61] "kube-controller-manager-embed-certs-306581" [3e8441b3-f3cc-4e0b-9e9b-2dc1fd41ca1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 18:21:08.732196   66229 system_pods.go:61] "kube-proxy-4vt6x" [559e4638-9505-4d7f-b84e-77b813c84ab4] Running
	I0819 18:21:08.732204   66229 system_pods.go:61] "kube-scheduler-embed-certs-306581" [39ec99a8-3e38-40f6-af5e-02a437573bd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 18:21:08.732210   66229 system_pods.go:61] "metrics-server-6867b74b74-dmpfh" [0edd2d8d-aa29-4817-babb-09e185fc0578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:21:08.732213   66229 system_pods.go:61] "storage-provisioner" [f267a05a-418f-49a9-b09d-a6330ffa4abf] Running
	I0819 18:21:08.732219   66229 system_pods.go:74] duration metric: took 12.445292ms to wait for pod list to return data ...
	I0819 18:21:08.732226   66229 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:21:08.735979   66229 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:21:08.736004   66229 node_conditions.go:123] node cpu capacity is 2
	I0819 18:21:08.736015   66229 node_conditions.go:105] duration metric: took 3.784963ms to run NodePressure ...
	I0819 18:21:08.736029   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:08.995746   66229 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001567   66229 kubeadm.go:739] kubelet initialised
	I0819 18:21:09.001592   66229 kubeadm.go:740] duration metric: took 5.816928ms waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001603   66229 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:21:09.006253   66229 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:11.015091   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:13.512551   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:15.512696   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:16.513342   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:16.513387   66229 pod_ready.go:82] duration metric: took 7.507092015s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:16.513404   66229 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519842   66229 pod_ready.go:93] pod "etcd-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.519864   66229 pod_ready.go:82] duration metric: took 1.006452738s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519873   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524383   66229 pod_ready.go:93] pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.524401   66229 pod_ready.go:82] duration metric: took 4.522465ms for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524411   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:19.536012   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:22.030530   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:23.530792   66229 pod_ready.go:93] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.530818   66229 pod_ready.go:82] duration metric: took 6.006401322s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.530828   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535011   66229 pod_ready.go:93] pod "kube-proxy-4vt6x" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.535030   66229 pod_ready.go:82] duration metric: took 4.196825ms for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535038   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538712   66229 pod_ready.go:93] pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.538731   66229 pod_ready.go:82] duration metric: took 3.686091ms for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538743   66229 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:25.545068   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:28.044531   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:30.044724   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:32.545647   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:35.044620   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:37.044937   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:39.045319   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:41.545155   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:43.545946   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:46.045829   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:48.544436   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:50.546582   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:53.045122   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:55.544595   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:57.544701   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:00.044887   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:02.044950   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:04.544241   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:06.546130   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:09.044418   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:11.045634   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:13.545020   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:16.045408   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:18.544890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:21.044294   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:23.045251   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:25.545598   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:27.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:30.044377   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:32.045041   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:34.045316   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:36.045466   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:38.543870   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:40.544216   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:42.545271   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:45.044619   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:47.045364   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:49.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:51.045992   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:53.544682   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:56.045091   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:58.045324   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:00.046083   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:02.545541   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:05.045078   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:07.544235   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:09.545586   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:12.045449   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:14.545054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:16.545253   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:19.044239   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:21.045012   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:23.045831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:25.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:28.045069   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:30.045417   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:32.545986   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:35.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:37.545427   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:39.545715   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:42.046173   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:44.545426   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:46.545560   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:48.546489   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:51.044803   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:53.044925   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:55.544871   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:57.545044   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:00.044157   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:02.045599   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:04.546054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:07.044956   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:09.044993   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:11.045233   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:13.046097   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:15.046223   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:17.544258   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:19.545890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:22.044892   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:24.045926   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:26.545100   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:29.044231   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:31.044942   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:33.545660   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:36.045482   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:38.545467   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:40.545731   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:43.045524   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:45.545299   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:48.044040   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:50.044556   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:52.046009   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:54.545370   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:57.044344   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:59.544590   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:02.045528   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:04.546831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:07.045865   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:09.544718   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:12.044142   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:14.045777   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:16.048107   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:18.545087   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:21.044910   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:23.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:23.539885   66229 pod_ready.go:82] duration metric: took 4m0.001128118s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" ...
	E0819 18:25:23.539910   66229 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:25:23.539927   66229 pod_ready.go:39] duration metric: took 4m14.538313663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
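Editor's note: the four-minute wait above gives up solely on metrics-server; every other system-critical pod went Ready within seconds. A rough equivalent of the same check from outside the test harness, assuming the embed-certs-306581 kubeconfig context and the addon's usual k8s-app=metrics-server label:

    # list the pods minikube polls (label selectors taken from the wait list above)
    kubectl --context embed-certs-306581 -n kube-system get pods -l 'k8s-app in (kube-dns,kube-proxy)'
    kubectl --context embed-certs-306581 -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'

    # reproduce the metrics-server wait with the same 4m timeout
    kubectl --context embed-certs-306581 -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=4m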
	I0819 18:25:23.539953   66229 kubeadm.go:597] duration metric: took 4m22.799312728s to restartPrimaryControlPlane
	W0819 18:25:23.540007   66229 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:25:23.540040   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:25:49.757089   66229 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.217024974s)
	I0819 18:25:49.757162   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:25:49.771550   66229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:25:49.780916   66229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:25:49.789732   66229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:25:49.789751   66229 kubeadm.go:157] found existing configuration files:
	
	I0819 18:25:49.789796   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:25:49.798373   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:25:49.798436   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:25:49.807148   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:25:49.815466   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:25:49.815528   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:25:49.824320   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:25:49.832472   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:25:49.832523   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:25:49.841050   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:25:49.849186   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:25:49.849243   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:25:49.857711   66229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:25:49.904029   66229 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:25:49.904211   66229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:25:50.021095   66229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:25:50.021242   66229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:25:50.021399   66229 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:25:50.031925   66229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:25:50.033989   66229 out.go:235]   - Generating certificates and keys ...
	I0819 18:25:50.034080   66229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:25:50.034163   66229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:25:50.034236   66229 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:25:50.034287   66229 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:25:50.034345   66229 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:25:50.034392   66229 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:25:50.034460   66229 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:25:50.034568   66229 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:25:50.034679   66229 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:25:50.034796   66229 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:25:50.034869   66229 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:25:50.034950   66229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:25:50.135488   66229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:25:50.189286   66229 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:25:50.602494   66229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:25:50.752478   66229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:25:51.009355   66229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:25:51.009947   66229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:25:51.012443   66229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:25:51.014364   66229 out.go:235]   - Booting up control plane ...
	I0819 18:25:51.014506   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:25:51.014618   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:25:51.014884   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:25:51.033153   66229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:25:51.040146   66229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:25:51.040228   66229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:25:51.167821   66229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:25:51.167952   66229 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:25:52.171536   66229 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003657825s
	I0819 18:25:52.171661   66229 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:25:56.673902   66229 kubeadm.go:310] [api-check] The API server is healthy after 4.502200468s
	I0819 18:25:56.700202   66229 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:25:56.718381   66229 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:25:56.745000   66229 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:25:56.745278   66229 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-306581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:25:56.759094   66229 kubeadm.go:310] [bootstrap-token] Using token: abvjrz.7whl2a0axm001wrp
	I0819 18:25:56.760573   66229 out.go:235]   - Configuring RBAC rules ...
	I0819 18:25:56.760724   66229 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:25:56.766575   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:25:56.780740   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:25:56.784467   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:25:56.788245   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:25:56.792110   66229 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:25:57.088316   66229 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:25:57.528128   66229 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:25:58.088280   66229 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:25:58.088324   66229 kubeadm.go:310] 
	I0819 18:25:58.088398   66229 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:25:58.088425   66229 kubeadm.go:310] 
	I0819 18:25:58.088559   66229 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:25:58.088585   66229 kubeadm.go:310] 
	I0819 18:25:58.088633   66229 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:25:58.088726   66229 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:25:58.088883   66229 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:25:58.088904   66229 kubeadm.go:310] 
	I0819 18:25:58.088983   66229 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:25:58.088996   66229 kubeadm.go:310] 
	I0819 18:25:58.089083   66229 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:25:58.089109   66229 kubeadm.go:310] 
	I0819 18:25:58.089185   66229 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:25:58.089294   66229 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:25:58.089419   66229 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:25:58.089441   66229 kubeadm.go:310] 
	I0819 18:25:58.089557   66229 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:25:58.089669   66229 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:25:58.089681   66229 kubeadm.go:310] 
	I0819 18:25:58.089798   66229 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token abvjrz.7whl2a0axm001wrp \
	I0819 18:25:58.089961   66229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:25:58.089995   66229 kubeadm.go:310] 	--control-plane 
	I0819 18:25:58.090005   66229 kubeadm.go:310] 
	I0819 18:25:58.090134   66229 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:25:58.090146   66229 kubeadm.go:310] 
	I0819 18:25:58.090270   66229 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token abvjrz.7whl2a0axm001wrp \
	I0819 18:25:58.090418   66229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:25:58.091186   66229 kubeadm.go:310] W0819 18:25:49.877896    2533 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:25:58.091610   66229 kubeadm.go:310] W0819 18:25:49.879026    2533 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:25:58.091792   66229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
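Editor's note: kubeadm's closing output above is itself the how-to; the follow-up on this node would look roughly like this, with the commands taken from the init output and the [WARNING Service-Kubelet] line (only the final kubectl check is added here):

    # address the kubelet warning printed above
    sudo systemctl enable kubelet.service

    # make the new admin kubeconfig usable, then confirm the control plane came up
    mkdir -p "$HOME/.kube"
    sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
    sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
    kubectl get nodes -o wide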
	I0819 18:25:58.091814   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:25:58.091824   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:25:58.093554   66229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:25:58.094739   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:25:58.105125   66229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 18:25:58.123435   66229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:25:58.123526   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:58.123532   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-306581 minikube.k8s.io/updated_at=2024_08_19T18_25_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=embed-certs-306581 minikube.k8s.io/primary=true
	I0819 18:25:58.148101   66229 ops.go:34] apiserver oom_adj: -16
	I0819 18:25:58.298505   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:58.799549   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:59.299523   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:59.798660   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:00.299282   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:00.799040   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:01.298647   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:01.798822   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.299035   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.798965   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.914076   66229 kubeadm.go:1113] duration metric: took 4.790608101s to wait for elevateKubeSystemPrivileges
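Editor's note: the "elevateKubeSystemPrivileges" wait above is minikube polling until the default service account exists and then binding kube-system:default to cluster-admin (both kubectl invocations are visible in the Run: lines). A hand-run check of the end state, reusing the same on-node binary and kubeconfig paths shown in the log:

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get serviceaccount default
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide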
	I0819 18:26:02.914111   66229 kubeadm.go:394] duration metric: took 5m2.226323065s to StartCluster
	I0819 18:26:02.914132   66229 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:26:02.914214   66229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:26:02.915798   66229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:26:02.916048   66229 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:26:02.916134   66229 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
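Editor's note: the toEnable map above shows only default-storageclass, storage-provisioner and metrics-server switched on for this profile. Outside the test harness the same state could be reached with the minikube CLI (storage-provisioner and default-storageclass are on by default), e.g.:

    minikube -p embed-certs-306581 addons enable metrics-server
    minikube -p embed-certs-306581 addons list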
	I0819 18:26:02.916258   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:26:02.916269   66229 addons.go:69] Setting default-storageclass=true in profile "embed-certs-306581"
	I0819 18:26:02.916257   66229 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-306581"
	I0819 18:26:02.916310   66229 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-306581"
	I0819 18:26:02.916342   66229 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-306581"
	I0819 18:26:02.916344   66229 addons.go:69] Setting metrics-server=true in profile "embed-certs-306581"
	W0819 18:26:02.916356   66229 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:26:02.916376   66229 addons.go:234] Setting addon metrics-server=true in "embed-certs-306581"
	I0819 18:26:02.916382   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	W0819 18:26:02.916389   66229 addons.go:243] addon metrics-server should already be in state true
	I0819 18:26:02.916427   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	I0819 18:26:02.916763   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916775   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916792   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.916805   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.916827   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916852   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.918733   66229 out.go:177] * Verifying Kubernetes components...
	I0819 18:26:02.920207   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:26:02.936535   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0819 18:26:02.936877   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0819 18:26:02.937025   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I0819 18:26:02.937128   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937375   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937485   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937675   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937698   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.937939   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937951   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937960   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.937965   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.938038   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938285   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938328   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938442   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.938611   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.938640   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.938821   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.938859   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.942730   66229 addons.go:234] Setting addon default-storageclass=true in "embed-certs-306581"
	W0819 18:26:02.942783   66229 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:26:02.942825   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	I0819 18:26:02.945808   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.945841   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.959554   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0819 18:26:02.959555   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0819 18:26:02.959950   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.960062   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.960479   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.960499   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.960634   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.960650   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.960793   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I0819 18:26:02.960976   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.961044   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.961090   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.961157   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.961205   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.961550   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.961571   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.961889   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.962444   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.962471   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.963100   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.963295   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.965320   66229 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:26:02.965389   66229 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:26:02.966795   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:26:02.966816   66229 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:26:02.966835   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.966935   66229 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:26:02.966956   66229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:26:02.966975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.970428   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.970527   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.970751   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.970771   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.971025   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.971047   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.971053   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.971198   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.971210   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.971364   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.971407   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.971526   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:02.971577   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.971704   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:02.978868   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0819 18:26:02.979249   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.979716   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.979734   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.980120   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.980329   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.982092   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.982322   66229 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:26:02.982337   66229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:26:02.982356   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.984740   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.985154   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.985175   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.985411   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.985583   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.985734   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.985861   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:03.159722   66229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:26:03.200632   66229 node_ready.go:35] waiting up to 6m0s for node "embed-certs-306581" to be "Ready" ...
	I0819 18:26:03.208989   66229 node_ready.go:49] node "embed-certs-306581" has status "Ready":"True"
	I0819 18:26:03.209020   66229 node_ready.go:38] duration metric: took 8.358821ms for node "embed-certs-306581" to be "Ready" ...
	I0819 18:26:03.209031   66229 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:26:03.215374   66229 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:03.293861   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:26:03.295078   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:26:03.362999   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:26:03.363021   66229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:26:03.455443   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:26:03.455471   66229 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:26:03.525137   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:26:03.525167   66229 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:26:03.594219   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:26:03.707027   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.707054   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.707419   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.707510   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.707526   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:03.707540   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.707551   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.707815   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.707863   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:03.707866   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.731452   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.731476   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.731752   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.731766   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.731774   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.521921   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.521943   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522255   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:04.522325   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.522338   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.522347   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.522369   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522422   66229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227312769s)
	I0819 18:26:04.522461   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.522472   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522548   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.522564   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.522574   66229 addons.go:475] Verifying addon metrics-server=true in "embed-certs-306581"
	I0819 18:26:04.523854   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:04.523859   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.523882   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.523899   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.523911   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.524115   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.524134   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.525754   66229 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0819 18:26:04.527292   66229 addons.go:510] duration metric: took 1.611171518s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0819 18:26:05.222505   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace has status "Ready":"False"
	I0819 18:26:06.222480   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.222511   66229 pod_ready.go:82] duration metric: took 3.00710581s for pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.222523   66229 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.229629   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.229653   66229 pod_ready.go:82] duration metric: took 7.122956ms for pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.229663   66229 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.234474   66229 pod_ready.go:93] pod "etcd-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.234497   66229 pod_ready.go:82] duration metric: took 4.828007ms for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.234510   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.239097   66229 pod_ready.go:93] pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.239114   66229 pod_ready.go:82] duration metric: took 4.596493ms for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.239123   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.745125   66229 pod_ready.go:93] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.745148   66229 pod_ready.go:82] duration metric: took 506.019468ms for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.745160   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-df5kf" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.019557   66229 pod_ready.go:93] pod "kube-proxy-df5kf" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:07.019594   66229 pod_ready.go:82] duration metric: took 274.427237ms for pod "kube-proxy-df5kf" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.019608   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.418650   66229 pod_ready.go:93] pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:07.418675   66229 pod_ready.go:82] duration metric: took 399.060317ms for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.418683   66229 pod_ready.go:39] duration metric: took 4.209640554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:26:07.418696   66229 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:26:07.418742   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:26:07.434205   66229 api_server.go:72] duration metric: took 4.518122629s to wait for apiserver process to appear ...
	I0819 18:26:07.434229   66229 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:26:07.434245   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:26:07.438540   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0819 18:26:07.439633   66229 api_server.go:141] control plane version: v1.31.0
	I0819 18:26:07.439654   66229 api_server.go:131] duration metric: took 5.418424ms to wait for apiserver health ...
	I0819 18:26:07.439664   66229 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:26:07.622538   66229 system_pods.go:59] 9 kube-system pods found
	I0819 18:26:07.622567   66229 system_pods.go:61] "coredns-6f6b679f8f-274qq" [af408da7-683b-4730-b836-a5ae446e84d4] Running
	I0819 18:26:07.622575   66229 system_pods.go:61] "coredns-6f6b679f8f-j764j" [726e772d-dd20-4427-b8b2-40422b5be1ef] Running
	I0819 18:26:07.622580   66229 system_pods.go:61] "etcd-embed-certs-306581" [291235bc-9e42-4982-93c4-d77a0116a9ed] Running
	I0819 18:26:07.622583   66229 system_pods.go:61] "kube-apiserver-embed-certs-306581" [2068ba5f-ea2d-4b99-87e4-2c9d16861cd4] Running
	I0819 18:26:07.622587   66229 system_pods.go:61] "kube-controller-manager-embed-certs-306581" [057adac9-1819-4c28-8bdd-4b95cf4dd33f] Running
	I0819 18:26:07.622590   66229 system_pods.go:61] "kube-proxy-df5kf" [0f004f8f-d49f-468e-acac-a7d691c9cdba] Running
	I0819 18:26:07.622594   66229 system_pods.go:61] "kube-scheduler-embed-certs-306581" [58a0610a-0718-4151-8e0b-bf9dd0e7864a] Running
	I0819 18:26:07.622600   66229 system_pods.go:61] "metrics-server-6867b74b74-j8qbw" [6c7ec046-01e2-4903-9937-c79aabc81bb2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:26:07.622604   66229 system_pods.go:61] "storage-provisioner" [26d63f30-45fd-48f4-973e-6a72cf931b9d] Running
	I0819 18:26:07.622611   66229 system_pods.go:74] duration metric: took 182.941942ms to wait for pod list to return data ...
	I0819 18:26:07.622619   66229 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:26:07.820899   66229 default_sa.go:45] found service account: "default"
	I0819 18:26:07.820924   66229 default_sa.go:55] duration metric: took 198.300082ms for default service account to be created ...
	I0819 18:26:07.820934   66229 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:26:08.021777   66229 system_pods.go:86] 9 kube-system pods found
	I0819 18:26:08.021803   66229 system_pods.go:89] "coredns-6f6b679f8f-274qq" [af408da7-683b-4730-b836-a5ae446e84d4] Running
	I0819 18:26:08.021809   66229 system_pods.go:89] "coredns-6f6b679f8f-j764j" [726e772d-dd20-4427-b8b2-40422b5be1ef] Running
	I0819 18:26:08.021813   66229 system_pods.go:89] "etcd-embed-certs-306581" [291235bc-9e42-4982-93c4-d77a0116a9ed] Running
	I0819 18:26:08.021817   66229 system_pods.go:89] "kube-apiserver-embed-certs-306581" [2068ba5f-ea2d-4b99-87e4-2c9d16861cd4] Running
	I0819 18:26:08.021820   66229 system_pods.go:89] "kube-controller-manager-embed-certs-306581" [057adac9-1819-4c28-8bdd-4b95cf4dd33f] Running
	I0819 18:26:08.021825   66229 system_pods.go:89] "kube-proxy-df5kf" [0f004f8f-d49f-468e-acac-a7d691c9cdba] Running
	I0819 18:26:08.021829   66229 system_pods.go:89] "kube-scheduler-embed-certs-306581" [58a0610a-0718-4151-8e0b-bf9dd0e7864a] Running
	I0819 18:26:08.021836   66229 system_pods.go:89] "metrics-server-6867b74b74-j8qbw" [6c7ec046-01e2-4903-9937-c79aabc81bb2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:26:08.021840   66229 system_pods.go:89] "storage-provisioner" [26d63f30-45fd-48f4-973e-6a72cf931b9d] Running
	I0819 18:26:08.021847   66229 system_pods.go:126] duration metric: took 200.908452ms to wait for k8s-apps to be running ...
	I0819 18:26:08.021853   66229 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:26:08.021896   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:26:08.035873   66229 system_svc.go:56] duration metric: took 14.008336ms WaitForService to wait for kubelet
	I0819 18:26:08.035902   66229 kubeadm.go:582] duration metric: took 5.119824696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:26:08.035928   66229 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:26:08.219981   66229 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:26:08.220005   66229 node_conditions.go:123] node cpu capacity is 2
	I0819 18:26:08.220016   66229 node_conditions.go:105] duration metric: took 184.083094ms to run NodePressure ...
	I0819 18:26:08.220025   66229 start.go:241] waiting for startup goroutines ...
	I0819 18:26:08.220032   66229 start.go:246] waiting for cluster config update ...
	I0819 18:26:08.220041   66229 start.go:255] writing updated cluster config ...
	I0819 18:26:08.220295   66229 ssh_runner.go:195] Run: rm -f paused
	I0819 18:26:08.267438   66229 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:26:08.269435   66229 out.go:177] * Done! kubectl is now configured to use "embed-certs-306581" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.013917533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092004013882387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17938509-ef36-4655-b5fa-f95f6e9be7e0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.014832408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13852ee0-4301-4a9b-b4c4-5b7ac95233f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.014886532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13852ee0-4301-4a9b-b4c4-5b7ac95233f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.015167768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13852ee0-4301-4a9b-b4c4-5b7ac95233f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.051273498Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fd47666-dfc6-4b7c-8a31-408474ed0808 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.051363892Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fd47666-dfc6-4b7c-8a31-408474ed0808 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.052319331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c47a6a08-f74f-47bd-b967-f5bd5726c5f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.052719893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092004052695995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c47a6a08-f74f-47bd-b967-f5bd5726c5f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.053208834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c8889d5-6197-4432-a8c3-dafed0da5882 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.053269065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c8889d5-6197-4432-a8c3-dafed0da5882 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.053460309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c8889d5-6197-4432-a8c3-dafed0da5882 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.097686048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25af8b72-d14e-4ca7-82d8-fa911ba420ec name=/runtime.v1.RuntimeService/Version
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.097770524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25af8b72-d14e-4ca7-82d8-fa911ba420ec name=/runtime.v1.RuntimeService/Version
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.098658848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab89552a-c010-4457-a285-526792c13a13 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.099181360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092004099156093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab89552a-c010-4457-a285-526792c13a13 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.099669168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1cc45fc-66ef-4bbc-80a9-6978b556ffda name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.099721835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1cc45fc-66ef-4bbc-80a9-6978b556ffda name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.099916904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1cc45fc-66ef-4bbc-80a9-6978b556ffda name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.136872028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a60a9ad2-457a-4403-974d-534b8e66c9c6 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.137141931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a60a9ad2-457a-4403-974d-534b8e66c9c6 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.138747588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c08e02a-ee3a-4b9e-8114-edec9dac7510 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.139324729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092004139298533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c08e02a-ee3a-4b9e-8114-edec9dac7510 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.140309087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f2845dc-6852-4abf-8f8e-b25420f2caf3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.140381338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f2845dc-6852-4abf-8f8e-b25420f2caf3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:26:44 no-preload-233969 crio[726]: time="2024-08-19 18:26:44.140651992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f2845dc-6852-4abf-8f8e-b25420f2caf3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07a784011c163       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b4d5818be915b       storage-provisioner
	77567c11d5611       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   37767a2eba14b       coredns-6f6b679f8f-kdrzp
	8561dfaa22d9d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   c6ad35e9012be       coredns-6f6b679f8f-vb6dx
	0fa5dfbb43c52       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   2aee12971ae28       kube-proxy-pt5nj
	bf6e79f754334       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   a588363c1a4b3       etcd-no-preload-233969
	a72417b056413       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   45067d098f025       kube-controller-manager-no-preload-233969
	7c6011dd9bf6f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   5afb929a542e0       kube-apiserver-no-preload-233969
	155f37c341f82       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   f6d7bbca3f21c       kube-scheduler-no-preload-233969
	76e071aa0b0c8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   9eb1a6b8f20d4       kube-apiserver-no-preload-233969
	
	
	==> coredns [77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-233969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-233969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=no-preload-233969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_17_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:17:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-233969
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:26:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:22:43 +0000   Mon, 19 Aug 2024 18:17:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:22:43 +0000   Mon, 19 Aug 2024 18:17:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:22:43 +0000   Mon, 19 Aug 2024 18:17:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:22:43 +0000   Mon, 19 Aug 2024 18:17:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.8
	  Hostname:    no-preload-233969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef4ac605df354c2fb51fb515363583c1
	  System UUID:                ef4ac605-df35-4c2f-b51f-b515363583c1
	  Boot ID:                    4f188a38-911b-4def-8f27-e5504e459084
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-kdrzp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-6f6b679f8f-vb6dx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-233969                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-233969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-no-preload-233969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-pt5nj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-233969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-bfkkf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node no-preload-233969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node no-preload-233969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node no-preload-233969 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node no-preload-233969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node no-preload-233969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node no-preload-233969 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node no-preload-233969 event: Registered Node no-preload-233969 in Controller
	
	
	==> dmesg <==
	[  +0.039051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.009642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.833961] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529430] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000035] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.317366] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.060791] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056977] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.181447] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.139739] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.272848] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +15.648356] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.057830] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.314347] systemd-fstab-generator[1419]: Ignoring "noauto" option for root device
	[  +3.276875] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.124171] kauditd_printk_skb: 55 callbacks suppressed
	[Aug19 18:13] kauditd_printk_skb: 30 callbacks suppressed
	[Aug19 18:17] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.003493] systemd-fstab-generator[3074]: Ignoring "noauto" option for root device
	[  +4.479238] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.577101] systemd-fstab-generator[3396]: Ignoring "noauto" option for root device
	[  +5.320862] systemd-fstab-generator[3527]: Ignoring "noauto" option for root device
	[  +0.125050] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.624852] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e] <==
	{"level":"info","ts":"2024-08-19T18:17:23.273436Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.8:2379"}
	{"level":"info","ts":"2024-08-19T18:17:23.276460Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T18:17:23.279217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:17:23.293993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:17:23.294082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-19T18:21:02.358465Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":18041298524702058265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T18:21:02.796070Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"515.427003ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.796273Z","caller":"traceutil/trace.go:171","msg":"trace[1995537411] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:660; }","duration":"515.671401ms","start":"2024-08-19T18:21:02.280569Z","end":"2024-08-19T18:21:02.796241Z","steps":["trace[1995537411] 'range keys from in-memory index tree'  (duration: 515.415437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.796356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"939.957838ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18041298524702058266 > lease_revoke:<id:7a5f916bdba526ba>","response":"size:29"}
	{"level":"info","ts":"2024-08-19T18:21:02.796937Z","caller":"traceutil/trace.go:171","msg":"trace[987301924] linearizableReadLoop","detail":"{readStateIndex:715; appliedIndex:713; }","duration":"939.382421ms","start":"2024-08-19T18:21:01.857542Z","end":"2024-08-19T18:21:02.796924Z","steps":["trace[987301924] 'read index received'  (duration: 938.373694ms)","trace[987301924] 'applied index is now lower than readState.Index'  (duration: 1.007575ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:21:02.797221Z","caller":"traceutil/trace.go:171","msg":"trace[1702930991] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"979.150088ms","start":"2024-08-19T18:21:01.818055Z","end":"2024-08-19T18:21:02.797205Z","steps":["trace[1702930991] 'process raft request'  (duration: 978.372082ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.798043Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:01.818028Z","time spent":"979.241984ms","remote":"127.0.0.1:44994","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-233969\" mod_revision:652 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-233969\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-233969\" > >"}
	{"level":"warn","ts":"2024-08-19T18:21:02.976828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.119267835s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.977061Z","caller":"traceutil/trace.go:171","msg":"trace[764819430] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:661; }","duration":"1.119503489s","start":"2024-08-19T18:21:01.857536Z","end":"2024-08-19T18:21:02.977039Z","steps":["trace[764819430] 'agreement among raft nodes before linearized reading'  (duration: 940.706645ms)","trace[764819430] 'range keys from in-memory index tree'  (duration: 178.539178ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.977159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:01.857495Z","time spent":"1.119645305s","remote":"127.0.0.1:44724","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-19T18:21:02.977362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"919.294159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.977435Z","caller":"traceutil/trace.go:171","msg":"trace[570947311] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:661; }","duration":"919.369093ms","start":"2024-08-19T18:21:02.058053Z","end":"2024-08-19T18:21:02.977422Z","steps":["trace[570947311] 'agreement among raft nodes before linearized reading'  (duration: 740.210413ms)","trace[570947311] 'range keys from in-memory index tree'  (duration: 179.072273ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.977849Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:02.058018Z","time spent":"919.812793ms","remote":"127.0.0.1:44904","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-19T18:21:02.978123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.761586ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-19T18:21:02.978186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.381117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T18:21:02.978236Z","caller":"traceutil/trace.go:171","msg":"trace[146728713] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:661; }","duration":"245.434228ms","start":"2024-08-19T18:21:02.732793Z","end":"2024-08-19T18:21:02.978227Z","steps":["trace[146728713] 'agreement among raft nodes before linearized reading'  (duration: 65.514785ms)","trace[146728713] 'count revisions from in-memory index tree'  (duration: 179.856377ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.978416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"810.217529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.978460Z","caller":"traceutil/trace.go:171","msg":"trace[353660773] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:661; }","duration":"810.26216ms","start":"2024-08-19T18:21:02.168188Z","end":"2024-08-19T18:21:02.978450Z","steps":["trace[353660773] 'agreement among raft nodes before linearized reading'  (duration: 630.126127ms)","trace[353660773] 'range keys from in-memory index tree'  (duration: 180.035394ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.978488Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:02.168144Z","time spent":"810.336666ms","remote":"127.0.0.1:44730","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-08-19T18:21:02.978191Z","caller":"traceutil/trace.go:171","msg":"trace[1719472852] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:661; }","duration":"181.832427ms","start":"2024-08-19T18:21:02.796349Z","end":"2024-08-19T18:21:02.978181Z","steps":["trace[1719472852] 'range keys from in-memory index tree'  (duration: 179.801051ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:26:44 up 14 min,  0 users,  load average: 0.17, 0.19, 0.17
	Linux no-preload-233969 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca] <==
	W0819 18:17:15.676636       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.696779       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.758330       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.771074       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.812085       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.823744       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.858103       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.878549       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.903737       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.905221       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.922456       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.967764       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.069774       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.172736       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.191563       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.197109       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.202569       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.217211       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.221560       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.284725       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.594860       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.679591       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.788770       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.976851       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:17.089894       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd] <==
	W0819 18:22:25.659525       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:22:25.659838       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:22:25.660999       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:22:25.661069       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:23:25.661252       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 18:23:25.661513       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:23:25.661650       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0819 18:23:25.661657       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:23:25.662824       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:23:25.662890       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:25:25.663563       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:25:25.663844       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 18:25:25.664020       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:25:25.664042       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 18:25:25.665055       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:25:25.665142       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198] <==
	E0819 18:21:31.662061       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:21:32.097456       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:22:01.669595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:22:02.105295       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:22:31.677704       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:22:32.112916       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:22:43.326572       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-233969"
	E0819 18:23:01.683565       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:23:02.120810       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:23:31.691529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:23:32.131052       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:23:42.240666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="212.254µs"
	I0819 18:23:56.237049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="148.048µs"
	E0819 18:24:01.698360       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:24:02.139254       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:24:31.705132       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:24:32.149399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:25:01.712128       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:25:02.159616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:25:31.718217       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:25:32.172729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:26:01.725374       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:26:02.181374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:26:31.733174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:26:32.191245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:17:33.480501       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:17:33.546623       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.8"]
	E0819 18:17:33.546716       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:17:33.912797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:17:33.912901       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:17:33.912988       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:17:33.915176       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:17:33.915452       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:17:33.915486       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:17:33.920472       1 config.go:197] "Starting service config controller"
	I0819 18:17:33.920605       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:17:33.920649       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:17:33.920665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:17:33.921151       1 config.go:326] "Starting node config controller"
	I0819 18:17:33.921197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:17:34.020823       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:17:34.020853       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:17:34.021486       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812] <==
	W0819 18:17:24.698639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:17:24.698763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:24.698888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:17:24.698935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.593232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:17:25.593287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.606138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:17:25.606170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.610633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:17:25.610705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.663813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:17:25.664034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.695643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:17:25.695700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.706305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:17:25.706478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.755170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:17:25.755324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.858020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:17:25.858482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.894545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:17:25.894712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:26.169314       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:17:26.169371       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 18:17:28.585641       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:25:37 no-preload-233969 kubelet[3402]: E0819 18:25:37.357786    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091937357540738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:37 no-preload-233969 kubelet[3402]: E0819 18:25:37.357810    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091937357540738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:38 no-preload-233969 kubelet[3402]: E0819 18:25:38.221154    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:25:47 no-preload-233969 kubelet[3402]: E0819 18:25:47.359151    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091947358686751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:47 no-preload-233969 kubelet[3402]: E0819 18:25:47.359577    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091947358686751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:49 no-preload-233969 kubelet[3402]: E0819 18:25:49.223064    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:25:57 no-preload-233969 kubelet[3402]: E0819 18:25:57.362130    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091957361603225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:25:57 no-preload-233969 kubelet[3402]: E0819 18:25:57.362186    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091957361603225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:00 no-preload-233969 kubelet[3402]: E0819 18:26:00.221063    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:26:07 no-preload-233969 kubelet[3402]: E0819 18:26:07.363480    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091967363256270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:07 no-preload-233969 kubelet[3402]: E0819 18:26:07.363517    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091967363256270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:12 no-preload-233969 kubelet[3402]: E0819 18:26:12.221733    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:26:17 no-preload-233969 kubelet[3402]: E0819 18:26:17.365069    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091977364672002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:17 no-preload-233969 kubelet[3402]: E0819 18:26:17.365116    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091977364672002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]: E0819 18:26:27.221745    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]: E0819 18:26:27.245364    3402 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]: E0819 18:26:27.368316    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091987367974832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:27 no-preload-233969 kubelet[3402]: E0819 18:26:27.368352    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091987367974832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:37 no-preload-233969 kubelet[3402]: E0819 18:26:37.370415    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091997370059326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:37 no-preload-233969 kubelet[3402]: E0819 18:26:37.370731    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091997370059326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:26:40 no-preload-233969 kubelet[3402]: E0819 18:26:40.221586    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
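Three kubelet complaints stand out in this block: the eviction manager rejects the ImageFsInfo response it gets from CRI-O ("missing image stats"), metrics-server stays in ImagePullBackOff because its image is pinned to the unreachable fake.domain registry, and the IPv6 iptables canary fails, apparently because the ip6tables nat table is unavailable in the guest kernel. A hedged troubleshooting sketch, run partly on the no-preload-233969 node and partly through kubectl (the pod name is taken from the log above; everything else is an assumption):

	# On the node: ask CRI-O directly for the image-filesystem stats the kubelet is requesting.
	sudo crictl imagefsinfo

	# Check whether the ip6table_nat module the iptables canary needs is loaded at all
	# (loading it with modprobe would only work if the guest kernel ships the module).
	lsmod | grep ip6table_nat

	# From the client side: confirm the ImagePullBackOff reason in the pod's events.
	kubectl -n kube-system describe pod metrics-server-6867b74b74-bfkkf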
	
	
	==> storage-provisioner [07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1] <==
	I0819 18:17:34.321868       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:17:34.359710       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:17:34.359799       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:17:34.389186       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:17:34.389428       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-233969_392a54d5-4efb-479b-93c2-958a02d43a17!
	I0819 18:17:34.391085       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afc45dcb-0808-4080-8cf1-3a1b697f30bb", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-233969_392a54d5-4efb-479b-93c2-958a02d43a17 became leader
	I0819 18:17:34.491466       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-233969_392a54d5-4efb-479b-93c2-958a02d43a17!
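The provisioner's leader election in this log uses the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above as its lock. If the election ever needs to be inspected, a minimal check (assuming kubectl targets this cluster) is to dump that object and read its leader-election annotation:

	# The current holder identity is recorded in the control-plane.alpha.kubernetes.io/leader annotation.
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml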
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-233969 -n no-preload-233969
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-233969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-bfkkf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-233969 describe pod metrics-server-6867b74b74-bfkkf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-233969 describe pod metrics-server-6867b74b74-bfkkf: exit status 1 (68.755683ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-bfkkf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-233969 describe pod metrics-server-6867b74b74-bfkkf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
[the identical warning above repeats for 12 more consecutive poll attempts; duplicate entries omitted]
E0819 18:20:21.263002   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
[the identical warning above repeats for 122 more consecutive poll attempts; duplicate entries omitted]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
E0819 18:23:15.961528   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
E0819 18:25:21.262698   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
E0819 18:28:15.961305   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
E0819 18:28:24.338986   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
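The last poll attempt above fails for a different reason than the earlier ones: the Kubernetes client's built-in rate limiter refuses to wait for its next token because the remaining context deadline would expire first. A minimal, self-contained sketch of that behaviour using golang.org/x/time/rate directly (the limiter settings here are illustrative, not the ones client-go actually configures):

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// Illustrative limiter: burst of 1, one new token every 5 seconds.
		limiter := rate.NewLimiter(rate.Every(5*time.Second), 1)
		limiter.Allow() // consume the only available token

		// A context whose deadline expires before the next token is due.
		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
		defer cancel()

		// Wait reports that honouring the limit would overrun the deadline,
		// which is the error text seen in the final warning above.
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println(err) // rate: Wait(n=1) would exceed context deadline
		}
	}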
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (226.722741ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-079123" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
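For context on the warnings above: the helper is repeatedly listing pods in the kubernetes-dashboard namespace by label selector, and every attempt is refused because the profile's apiserver is stopped. A rough client-go sketch of that kind of poll, assuming a reachable kubeconfig (the path below is hypothetical, not the one the test uses):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the real test resolves it from the minikube profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll the same label selector the warnings show; while the apiserver is down,
		// each List call returns a "connection refused" dial error.
		for i := 0; i < 10; i++ {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				fmt.Println("pod list failed:", err)
				time.Sleep(5 * time.Second)
				continue
			}
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
	}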
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (224.134424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-079123 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-975771                              | cert-expiration-975771       | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-233969                  | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-233969                                   | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233045             | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079123        | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233045                  | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-813424       | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:16 UTC |
	|         | default-k8s-diff-port-813424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079123             | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-233045 image list                           | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-814719 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | disable-driver-mounts-814719                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306581            | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC | 19 Aug 24 18:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306581                 | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC | 19 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:15:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:15:52.756356   66229 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:15:52.756664   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756675   66229 out.go:358] Setting ErrFile to fd 2...
	I0819 18:15:52.756680   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756881   66229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:15:52.757409   66229 out.go:352] Setting JSON to false
	I0819 18:15:52.758366   66229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7098,"bootTime":1724084255,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:15:52.758430   66229 start.go:139] virtualization: kvm guest
	I0819 18:15:52.760977   66229 out.go:177] * [embed-certs-306581] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:15:52.762479   66229 notify.go:220] Checking for updates...
	I0819 18:15:52.762504   66229 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:15:52.763952   66229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:15:52.765453   66229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:15:52.766810   66229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:15:52.768135   66229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:15:52.769369   66229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:15:52.771017   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:52.771443   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.771504   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.786463   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0819 18:15:52.786925   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.787501   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.787523   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.787800   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.787975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.788239   66229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:15:52.788527   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.788562   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.803703   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0819 18:15:52.804145   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.804609   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.804625   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.804962   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.805142   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.842707   66229 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:15:52.844070   66229 start.go:297] selected driver: kvm2
	I0819 18:15:52.844092   66229 start.go:901] validating driver "kvm2" against &{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.844258   66229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:15:52.844998   66229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.845085   66229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:15:52.860606   66229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:15:52.861678   66229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:15:52.861730   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:15:52.861742   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:15:52.861793   66229 start.go:340] cluster config:
	{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.862003   66229 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.864173   66229 out.go:177] * Starting "embed-certs-306581" primary control-plane node in "embed-certs-306581" cluster
	I0819 18:15:52.865772   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:15:52.865819   66229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:15:52.865827   66229 cache.go:56] Caching tarball of preloaded images
	I0819 18:15:52.865902   66229 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:15:52.865913   66229 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:15:52.866012   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:15:52.866250   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:15:52.866299   66229 start.go:364] duration metric: took 26.7µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:15:52.866311   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:15:52.866316   66229 fix.go:54] fixHost starting: 
	I0819 18:15:52.866636   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.866671   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.883154   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0819 18:15:52.883648   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.884149   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.884170   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.884509   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.884710   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.884888   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:15:52.886632   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Running err=<nil>
	W0819 18:15:52.886653   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:15:52.888856   66229 out.go:177] * Updating the running kvm2 "embed-certs-306581" VM ...
	I0819 18:15:50.375775   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.376597   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:50.455083   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:50.467702   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:50.467768   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:50.517276   63216 cri.go:89] found id: ""
	I0819 18:15:50.517306   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.517315   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:50.517323   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:50.517399   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:50.550878   63216 cri.go:89] found id: ""
	I0819 18:15:50.550905   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.550914   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:50.550921   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:50.550984   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:50.583515   63216 cri.go:89] found id: ""
	I0819 18:15:50.583543   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.583553   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:50.583560   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:50.583622   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:50.618265   63216 cri.go:89] found id: ""
	I0819 18:15:50.618291   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.618299   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:50.618304   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:50.618362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:50.653436   63216 cri.go:89] found id: ""
	I0819 18:15:50.653461   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.653469   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:50.653476   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:50.653534   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:50.687715   63216 cri.go:89] found id: ""
	I0819 18:15:50.687745   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.687757   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:50.687764   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:50.687885   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:50.721235   63216 cri.go:89] found id: ""
	I0819 18:15:50.721262   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.721272   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:50.721280   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:50.721328   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:50.754095   63216 cri.go:89] found id: ""
	I0819 18:15:50.754126   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.754134   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:50.754143   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:50.754156   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:50.805661   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:50.805698   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:50.819495   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:50.819536   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:50.887296   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:50.887317   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:50.887334   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:50.966224   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:50.966261   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.508007   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:53.520812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:53.520870   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:53.552790   63216 cri.go:89] found id: ""
	I0819 18:15:53.552816   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.552823   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:53.552829   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:53.552873   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:53.585937   63216 cri.go:89] found id: ""
	I0819 18:15:53.585969   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.585978   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:53.585986   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:53.586057   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:53.618890   63216 cri.go:89] found id: ""
	I0819 18:15:53.618915   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.618922   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:53.618928   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:53.618975   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:53.650045   63216 cri.go:89] found id: ""
	I0819 18:15:53.650069   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.650076   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:53.650082   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:53.650138   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:53.685069   63216 cri.go:89] found id: ""
	I0819 18:15:53.685097   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.685106   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:53.685113   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:53.685179   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:53.717742   63216 cri.go:89] found id: ""
	I0819 18:15:53.717771   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.717778   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:53.717784   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:53.717832   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:53.747768   63216 cri.go:89] found id: ""
	I0819 18:15:53.747798   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.747806   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:53.747812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:53.747858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:53.779973   63216 cri.go:89] found id: ""
	I0819 18:15:53.779999   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.780006   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:53.780016   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:53.780027   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.815619   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:53.815656   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:53.866767   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:53.866802   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:53.879693   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:53.879721   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:53.947610   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:53.947640   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:53.947659   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:52.172237   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:54.172434   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.890101   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:15:52.890131   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.890374   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:15:52.892900   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893405   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:12:30 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:15:52.893431   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893613   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:15:52.893796   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.893979   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.894149   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:15:52.894328   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:52.894580   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:15:52.894597   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:15:55.789130   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:54.376799   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.884787   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.524639   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:56.537312   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:56.537395   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:56.569913   63216 cri.go:89] found id: ""
	I0819 18:15:56.569958   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.569965   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:56.569972   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:56.570031   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:56.602119   63216 cri.go:89] found id: ""
	I0819 18:15:56.602145   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.602152   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:56.602158   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:56.602211   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:56.634864   63216 cri.go:89] found id: ""
	I0819 18:15:56.634900   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.634910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:56.634920   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:56.634982   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:56.667099   63216 cri.go:89] found id: ""
	I0819 18:15:56.667127   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.667136   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:56.667145   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:56.667194   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:56.703539   63216 cri.go:89] found id: ""
	I0819 18:15:56.703562   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.703571   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:56.703576   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:56.703637   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.734668   63216 cri.go:89] found id: ""
	I0819 18:15:56.734691   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.734698   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:56.734703   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:56.734747   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:56.768840   63216 cri.go:89] found id: ""
	I0819 18:15:56.768866   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.768874   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:56.768880   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:56.768925   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:56.800337   63216 cri.go:89] found id: ""
	I0819 18:15:56.800366   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.800375   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:56.800384   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:56.800398   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:56.866036   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:56.866060   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:56.866072   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:56.955372   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:56.955414   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:57.004450   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:57.004477   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:57.057284   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:57.057320   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.570450   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:59.583640   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:59.583729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:59.617911   63216 cri.go:89] found id: ""
	I0819 18:15:59.617943   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.617954   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:59.617963   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:59.618014   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:59.650239   63216 cri.go:89] found id: ""
	I0819 18:15:59.650265   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.650274   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:59.650279   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:59.650329   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:59.684877   63216 cri.go:89] found id: ""
	I0819 18:15:59.684902   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.684910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:59.684916   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:59.684977   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:59.717378   63216 cri.go:89] found id: ""
	I0819 18:15:59.717402   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.717414   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:59.717428   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:59.717484   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:59.748937   63216 cri.go:89] found id: ""
	I0819 18:15:59.748968   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.748980   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:59.748989   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:59.749058   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.672222   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.171375   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:58.861002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:59.375951   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:01.376193   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:03.376512   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.781784   63216 cri.go:89] found id: ""
	I0819 18:15:59.781819   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.781830   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:59.781837   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:59.781899   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:59.815593   63216 cri.go:89] found id: ""
	I0819 18:15:59.815626   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.815637   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:59.815645   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:59.815709   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:59.847540   63216 cri.go:89] found id: ""
	I0819 18:15:59.847571   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.847581   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:59.847595   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:59.847609   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.860256   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:59.860292   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:59.931873   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:59.931900   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:59.931915   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:00.011897   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:00.011938   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:00.047600   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:00.047628   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.599457   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:02.617040   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:02.617112   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:02.658148   63216 cri.go:89] found id: ""
	I0819 18:16:02.658173   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.658181   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:02.658187   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:02.658256   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:02.711833   63216 cri.go:89] found id: ""
	I0819 18:16:02.711873   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.711882   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:02.711889   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:02.711945   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:02.746611   63216 cri.go:89] found id: ""
	I0819 18:16:02.746644   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.746652   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:02.746658   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:02.746712   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:02.781731   63216 cri.go:89] found id: ""
	I0819 18:16:02.781757   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.781764   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:02.781771   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:02.781827   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:02.814215   63216 cri.go:89] found id: ""
	I0819 18:16:02.814242   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.814253   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:02.814260   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:02.814320   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:02.848767   63216 cri.go:89] found id: ""
	I0819 18:16:02.848804   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.848815   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:02.848823   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:02.848881   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:02.882890   63216 cri.go:89] found id: ""
	I0819 18:16:02.882913   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.882920   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:02.882927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:02.882983   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:02.918333   63216 cri.go:89] found id: ""
	I0819 18:16:02.918362   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.918370   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:02.918393   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:02.918405   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.966994   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:02.967024   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:02.980377   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:02.980437   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:03.045097   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:03.045127   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:03.045145   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:03.126682   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:03.126727   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:01.671492   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.171471   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.941029   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:05.376677   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:05.376705   62749 pod_ready.go:82] duration metric: took 4m0.006404877s for pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:05.376714   62749 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 18:16:05.376720   62749 pod_ready.go:39] duration metric: took 4m6.335802515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:05.376735   62749 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:16:05.376775   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.376822   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.419678   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:05.419719   62749 cri.go:89] found id: ""
	I0819 18:16:05.419728   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:05.419801   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.424210   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.424271   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.459501   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:05.459527   62749 cri.go:89] found id: ""
	I0819 18:16:05.459535   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:05.459578   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.463654   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.463711   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.497591   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:05.497613   62749 cri.go:89] found id: ""
	I0819 18:16:05.497620   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:05.497667   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.501207   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.501274   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.535112   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:05.535141   62749 cri.go:89] found id: ""
	I0819 18:16:05.535150   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:05.535215   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.538855   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.538909   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.573744   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:05.573769   62749 cri.go:89] found id: ""
	I0819 18:16:05.573776   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:05.573824   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.577981   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.578045   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.616545   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:05.616569   62749 cri.go:89] found id: ""
	I0819 18:16:05.616577   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:05.616630   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.620549   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.620597   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.662743   62749 cri.go:89] found id: ""
	I0819 18:16:05.662781   62749 logs.go:276] 0 containers: []
	W0819 18:16:05.662792   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.662800   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:05.662855   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:05.711433   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.711456   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:05.711463   62749 cri.go:89] found id: ""
	I0819 18:16:05.711472   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:05.711536   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.716476   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.720240   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:05.720261   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.261474   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:06.261523   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:06.384895   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:06.384927   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:06.421665   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:06.421700   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:06.461866   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:06.461900   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:06.496543   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:06.496570   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:06.551478   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:06.551518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:06.586858   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.586886   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.625272   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.625300   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:06.697922   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:06.697960   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:06.711624   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:06.711658   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:06.752648   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:06.752677   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:06.796805   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:06.796836   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.662843   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:05.680724   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.680811   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.719205   63216 cri.go:89] found id: ""
	I0819 18:16:05.719227   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.719234   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:05.719240   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.719283   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.764548   63216 cri.go:89] found id: ""
	I0819 18:16:05.764577   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.764587   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:05.764593   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.764644   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.800478   63216 cri.go:89] found id: ""
	I0819 18:16:05.800503   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.800521   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:05.800527   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.800582   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.837403   63216 cri.go:89] found id: ""
	I0819 18:16:05.837432   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.837443   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:05.837450   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.837506   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.869330   63216 cri.go:89] found id: ""
	I0819 18:16:05.869357   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.869367   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:05.869375   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.869463   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.900354   63216 cri.go:89] found id: ""
	I0819 18:16:05.900382   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.900393   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:05.900401   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.900457   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.933899   63216 cri.go:89] found id: ""
	I0819 18:16:05.933926   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.933937   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.933944   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:05.934003   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:05.968393   63216 cri.go:89] found id: ""
	I0819 18:16:05.968421   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.968430   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:05.968441   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:05.968458   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:05.980957   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:05.980988   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:06.045310   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:06.045359   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:06.045375   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.124351   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.124389   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.168102   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.168130   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:08.718499   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:08.731535   63216 kubeadm.go:597] duration metric: took 4m4.252819836s to restartPrimaryControlPlane
	W0819 18:16:08.731622   63216 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:08.731651   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:06.172578   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.671110   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.013019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:09.338729   62749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:09.355014   62749 api_server.go:72] duration metric: took 4m18.036977131s to wait for apiserver process to appear ...
	I0819 18:16:09.355046   62749 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:16:09.355086   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:09.355148   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:09.390088   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:09.390107   62749 cri.go:89] found id: ""
	I0819 18:16:09.390115   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:09.390161   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.393972   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:09.394024   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:09.426919   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:09.426943   62749 cri.go:89] found id: ""
	I0819 18:16:09.426953   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:09.427007   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.430685   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:09.430755   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:09.465843   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:09.465867   62749 cri.go:89] found id: ""
	I0819 18:16:09.465876   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:09.465936   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.469990   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:09.470057   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:09.503690   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:09.503716   62749 cri.go:89] found id: ""
	I0819 18:16:09.503727   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:09.503789   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.507731   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:09.507791   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:09.541067   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:09.541098   62749 cri.go:89] found id: ""
	I0819 18:16:09.541108   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:09.541169   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.546503   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:09.546568   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:09.587861   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:09.587888   62749 cri.go:89] found id: ""
	I0819 18:16:09.587898   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:09.587960   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.593765   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:09.593831   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:09.628426   62749 cri.go:89] found id: ""
	I0819 18:16:09.628456   62749 logs.go:276] 0 containers: []
	W0819 18:16:09.628464   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:09.628470   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:09.628529   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:09.666596   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.666622   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.666628   62749 cri.go:89] found id: ""
	I0819 18:16:09.666636   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:09.666688   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.670929   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.674840   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:09.674863   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.708286   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:09.708313   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.739212   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:09.739234   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:10.171487   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:10.171535   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:10.208985   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:10.209025   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:10.222001   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:10.222028   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:10.267193   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:10.267225   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:10.300082   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:10.300110   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:10.333403   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:10.333434   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:10.371961   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:10.371989   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:10.425550   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:10.425586   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:10.500742   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:10.500796   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:10.602484   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:10.602518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
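The log-gathering pass above always follows the same two-step pattern: resolve container IDs with `crictl ps -a --quiet --name=<component>`, then tail each container with `crictl logs --tail 400 <id>`. A minimal standalone sketch of that pattern, assuming crictl is on PATH and the commands are run locally with sudo rather than through minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns all container IDs (any state) whose name matches the
	// filter, mirroring `sudo crictl ps -a --quiet --name=<name>`.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs returns the last 400 log lines of one container, mirroring
	// `sudo crictl logs --tail 400 <id>`.
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(component)
			if err != nil {
				fmt.Println("listing", component, "failed:", err)
				continue
			}
			for _, id := range ids {
				logs, err := tailLogs(id)
				if err != nil {
					fmt.Println("logs for", id, "failed:", err)
					continue
				}
				fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
			}
		}
	}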
	I0819 18:16:13.149769   62749 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8444/healthz ...
	I0819 18:16:13.154238   62749 api_server.go:279] https://192.168.61.243:8444/healthz returned 200:
	ok
	I0819 18:16:13.155139   62749 api_server.go:141] control plane version: v1.31.0
	I0819 18:16:13.155154   62749 api_server.go:131] duration metric: took 3.800101993s to wait for apiserver health ...
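The healthz wait that just completed is an HTTPS GET against `/healthz`, retried until the body reads "ok". A rough equivalent is sketched below; TLS verification is skipped purely to keep the example short (an assumption for illustration only, the real check authenticates against the cluster), and the URL and timeout are taken from this run:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 with body "ok", or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.243:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}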
	I0819 18:16:13.155161   62749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:16:13.155180   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:13.155232   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:13.194723   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.194749   62749 cri.go:89] found id: ""
	I0819 18:16:13.194759   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:13.194811   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.198645   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:13.198703   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:13.236332   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.236405   62749 cri.go:89] found id: ""
	I0819 18:16:13.236418   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:13.236473   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.240682   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:13.240764   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:13.277257   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:13.277283   62749 cri.go:89] found id: ""
	I0819 18:16:13.277290   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:13.277339   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.281458   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:13.281516   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:13.319419   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.319444   62749 cri.go:89] found id: ""
	I0819 18:16:13.319453   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:13.319508   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.323377   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:13.323444   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:13.357320   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.357344   62749 cri.go:89] found id: ""
	I0819 18:16:13.357353   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:13.357417   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.361505   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:13.361582   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:13.396379   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.396396   62749 cri.go:89] found id: ""
	I0819 18:16:13.396403   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:13.396457   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.400372   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:13.400442   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:13.433520   62749 cri.go:89] found id: ""
	I0819 18:16:13.433551   62749 logs.go:276] 0 containers: []
	W0819 18:16:13.433561   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:13.433569   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:13.433629   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:13.467382   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.467411   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.467418   62749 cri.go:89] found id: ""
	I0819 18:16:13.467427   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:13.467486   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.471371   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.474905   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:13.474924   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:13.547564   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:13.547596   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.593702   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:13.593731   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.629610   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:13.629634   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.669337   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:13.669372   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.729986   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:13.730012   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.766424   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:13.766459   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.806677   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:13.806702   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:13.540438   63216 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.808760826s)
	I0819 18:16:13.540508   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:13.555141   63216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:16:13.565159   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:16:13.575671   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:16:13.575689   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:16:13.575743   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:16:13.586181   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:16:13.586388   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:16:13.597239   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:16:13.606788   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:16:13.606857   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:16:13.616964   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.627128   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:16:13.627195   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.637263   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:16:13.646834   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:16:13.646898   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:16:13.657566   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:16:13.887585   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
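The "config check failed" block above is the stale-config cleanup: each well-known /etc/kubernetes/*.conf file is grepped for https://control-plane.minikube.internal:8443 and removed when the grep does not match (exit status 2 here just means the file does not exist). A simplified sketch of that decision, run locally instead of over SSH:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	// cleanStaleKubeconfigs removes any kubeconfig file that does not reference
	// the expected control-plane endpoint; missing files are treated the same
	// way, matching the grep-then-rm behaviour in the log.
	func cleanStaleKubeconfigs() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q does not reference %s - removing\n", f, endpoint)
				_ = os.Remove(f) // ignore "file does not exist"
			}
		}
	}

	func main() { cleanStaleKubeconfigs() }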
	I0819 18:16:11.171886   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:13.672521   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:14.199046   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:14.199103   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:14.213508   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:14.213537   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:14.341980   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:14.342017   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:14.389817   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:14.389853   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:14.425890   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:14.425928   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:16.991182   62749 system_pods.go:59] 8 kube-system pods found
	I0819 18:16:16.991211   62749 system_pods.go:61] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.991217   62749 system_pods.go:61] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.991221   62749 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.991225   62749 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.991229   62749 system_pods.go:61] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.991232   62749 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.991239   62749 system_pods.go:61] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.991243   62749 system_pods.go:61] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.991250   62749 system_pods.go:74] duration metric: took 3.836084784s to wait for pod list to return data ...
	I0819 18:16:16.991257   62749 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:16:16.993181   62749 default_sa.go:45] found service account: "default"
	I0819 18:16:16.993201   62749 default_sa.go:55] duration metric: took 1.93729ms for default service account to be created ...
	I0819 18:16:16.993208   62749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:16:16.997803   62749 system_pods.go:86] 8 kube-system pods found
	I0819 18:16:16.997825   62749 system_pods.go:89] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.997830   62749 system_pods.go:89] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.997835   62749 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.997840   62749 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.997844   62749 system_pods.go:89] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.997848   62749 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.997854   62749 system_pods.go:89] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.997861   62749 system_pods.go:89] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.997868   62749 system_pods.go:126] duration metric: took 4.655661ms to wait for k8s-apps to be running ...
	I0819 18:16:16.997877   62749 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:16:16.997917   62749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:17.013524   62749 system_svc.go:56] duration metric: took 15.634104ms WaitForService to wait for kubelet
	I0819 18:16:17.013559   62749 kubeadm.go:582] duration metric: took 4m25.695525816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:16:17.013585   62749 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:16:17.016278   62749 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:16:17.016301   62749 node_conditions.go:123] node cpu capacity is 2
	I0819 18:16:17.016315   62749 node_conditions.go:105] duration metric: took 2.723578ms to run NodePressure ...
	I0819 18:16:17.016326   62749 start.go:241] waiting for startup goroutines ...
	I0819 18:16:17.016336   62749 start.go:246] waiting for cluster config update ...
	I0819 18:16:17.016351   62749 start.go:255] writing updated cluster config ...
	I0819 18:16:17.016817   62749 ssh_runner.go:195] Run: rm -f paused
	I0819 18:16:17.063056   62749 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:16:17.065819   62749 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-813424" cluster and "default" namespace by default
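Before printing "Done!", this run waited for the kube-system pods to appear and for the "default" service account to exist. With client-go those waits reduce to a couple of polls; the sketch below assumes the node-local kubeconfig path from the log is reachable from where the code runs, which is a simplification of how minikube actually talks to the cluster:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as seen in the log; adjust for a local kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		// Poll until kube-system pods are reported and the "default" service
		// account has been created by the controller manager.
		for {
			pods, podErr := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			_, saErr := clientset.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if podErr == nil && saErr == nil && len(pods.Items) > 0 {
				fmt.Printf("%d kube-system pods found, default service account present\n", len(pods.Items))
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for cluster to settle")
			case <-time.After(2 * time.Second):
			}
		}
	}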
	I0819 18:16:14.093007   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:17.164989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:16.172074   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:18.670402   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:20.671024   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:22.671462   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:26.288975   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:25.175354   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:27.671452   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.671496   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.357082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:31.671726   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:33.672458   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:35.437060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:36.171920   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.172318   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.513064   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:40.670687   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:42.670858   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.671276   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.589000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.660996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.171302   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:49.171707   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:51.675414   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:53.665939   62137 pod_ready.go:82] duration metric: took 4m0.001066956s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:53.665969   62137 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:16:53.665994   62137 pod_ready.go:39] duration metric: took 4m12.464901403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:53.666051   62137 kubeadm.go:597] duration metric: took 4m20.502224967s to restartPrimaryControlPlane
	W0819 18:16:53.666114   62137 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:53.666143   62137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:53.740978   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:56.817027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:02.892936   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:05.965053   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:12.048961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:15.116969   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:19.922253   62137 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.256081543s)
	I0819 18:17:19.922334   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:19.937012   62137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:17:19.946269   62137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:17:19.955344   62137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:17:19.955363   62137 kubeadm.go:157] found existing configuration files:
	
	I0819 18:17:19.955405   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:17:19.963979   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:17:19.964039   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:17:19.972679   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:17:19.980890   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:17:19.980947   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:17:19.989705   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:17:19.998606   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:17:19.998664   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:17:20.007553   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:17:20.016136   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:17:20.016185   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:17:20.024827   62137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:17:20.073205   62137 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:17:20.073284   62137 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:17:20.186906   62137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:17:20.187034   62137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:17:20.187125   62137 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:17:20.198750   62137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:17:20.200704   62137 out.go:235]   - Generating certificates and keys ...
	I0819 18:17:20.200810   62137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:17:20.200905   62137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:17:20.201015   62137 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:17:20.201099   62137 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:17:20.201202   62137 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:17:20.201279   62137 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:17:20.201370   62137 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:17:20.201468   62137 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:17:20.201578   62137 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:17:20.201686   62137 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:17:20.201743   62137 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:17:20.201823   62137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:17:20.386866   62137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:17:20.483991   62137 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:17:20.575440   62137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:17:20.704349   62137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:17:20.834890   62137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:17:20.835583   62137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:17:20.839290   62137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:17:21.197002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:20.841232   62137 out.go:235]   - Booting up control plane ...
	I0819 18:17:20.841313   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:17:20.841374   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:17:20.841428   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:17:20.858185   62137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:17:20.866369   62137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:17:20.866447   62137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:17:20.997302   62137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:17:20.997435   62137 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:17:21.499506   62137 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041994ms
	I0819 18:17:21.499625   62137 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:17:26.501489   62137 kubeadm.go:310] [api-check] The API server is healthy after 5.002014094s
	I0819 18:17:26.514398   62137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:17:26.534278   62137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:17:26.557460   62137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:17:26.557706   62137 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-233969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:17:26.569142   62137 kubeadm.go:310] [bootstrap-token] Using token: 2skh80.c6u95wnw3x4gmagv
	I0819 18:17:24.273082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:26.570814   62137 out.go:235]   - Configuring RBAC rules ...
	I0819 18:17:26.570940   62137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:17:26.583073   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:17:26.592407   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:17:26.595488   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:17:26.599062   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:17:26.603754   62137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:17:26.908245   62137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:17:27.340277   62137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:17:27.909394   62137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:17:27.912696   62137 kubeadm.go:310] 
	I0819 18:17:27.912811   62137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:17:27.912834   62137 kubeadm.go:310] 
	I0819 18:17:27.912953   62137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:17:27.912965   62137 kubeadm.go:310] 
	I0819 18:17:27.912996   62137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:17:27.913086   62137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:17:27.913166   62137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:17:27.913178   62137 kubeadm.go:310] 
	I0819 18:17:27.913246   62137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:17:27.913266   62137 kubeadm.go:310] 
	I0819 18:17:27.913338   62137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:17:27.913349   62137 kubeadm.go:310] 
	I0819 18:17:27.913422   62137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:17:27.913527   62137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:17:27.913613   62137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:17:27.913622   62137 kubeadm.go:310] 
	I0819 18:17:27.913727   62137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:17:27.913827   62137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:17:27.913842   62137 kubeadm.go:310] 
	I0819 18:17:27.913934   62137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914073   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:17:27.914112   62137 kubeadm.go:310] 	--control-plane 
	I0819 18:17:27.914121   62137 kubeadm.go:310] 
	I0819 18:17:27.914223   62137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:17:27.914235   62137 kubeadm.go:310] 
	I0819 18:17:27.914353   62137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914499   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:17:27.916002   62137 kubeadm.go:310] W0819 18:17:20.045306    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916280   62137 kubeadm.go:310] W0819 18:17:20.046268    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916390   62137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:17:27.916417   62137 cni.go:84] Creating CNI manager for ""
	I0819 18:17:27.916426   62137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:17:27.918384   62137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:17:27.919646   62137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:17:27.930298   62137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 18:17:27.946332   62137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:17:27.946440   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:27.946462   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-233969 minikube.k8s.io/updated_at=2024_08_19T18_17_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=no-preload-233969 minikube.k8s.io/primary=true
	I0819 18:17:27.972836   62137 ops.go:34] apiserver oom_adj: -16
	I0819 18:17:28.134899   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:28.635909   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.135326   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.635339   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.135992   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.635626   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.135493   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.635632   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.135812   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.208229   62137 kubeadm.go:1113] duration metric: took 4.261865811s to wait for elevateKubeSystemPrivileges
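The repeated `kubectl get sa default` calls above are a plain retry loop: kubeadm has just initialized the cluster, and the command is re-run every half second until the service account controller has created the "default" account. The same wait-until-it-works pattern, expressed as a generic helper over a shell command (timeout and interval values are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retryCommand re-runs cmd until it exits successfully or the timeout passes.
	func retryCommand(timeout, interval time.Duration, name string, args ...string) error {
		deadline := time.Now().Add(timeout)
		for {
			if out, err := exec.Command(name, args...).CombinedOutput(); err == nil {
				fmt.Print(string(out))
				return nil
			} else if time.Now().After(deadline) {
				return fmt.Errorf("%s %v still failing after %s: %v\n%s", name, args, timeout, err, out)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := retryCommand(2*time.Minute, 500*time.Millisecond,
			"sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err != nil {
			fmt.Println(err)
		}
	}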
	I0819 18:17:32.208254   62137 kubeadm.go:394] duration metric: took 4m59.094587246s to StartCluster
	I0819 18:17:32.208270   62137 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.208350   62137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:17:32.210604   62137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.210888   62137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:17:32.210967   62137 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:17:32.211052   62137 addons.go:69] Setting storage-provisioner=true in profile "no-preload-233969"
	I0819 18:17:32.211070   62137 addons.go:69] Setting default-storageclass=true in profile "no-preload-233969"
	I0819 18:17:32.211088   62137 addons.go:234] Setting addon storage-provisioner=true in "no-preload-233969"
	I0819 18:17:32.211084   62137 addons.go:69] Setting metrics-server=true in profile "no-preload-233969"
	W0819 18:17:32.211096   62137 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:17:32.211102   62137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-233969"
	I0819 18:17:32.211125   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211126   62137 addons.go:234] Setting addon metrics-server=true in "no-preload-233969"
	W0819 18:17:32.211166   62137 addons.go:243] addon metrics-server should already be in state true
	I0819 18:17:32.211198   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211124   62137 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:17:32.211475   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211505   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211589   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211601   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211619   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211623   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.212714   62137 out.go:177] * Verifying Kubernetes components...
	I0819 18:17:32.214075   62137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:17:32.227207   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0819 18:17:32.227219   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0819 18:17:32.227615   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.227709   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.228122   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228142   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228216   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228236   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228543   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.228610   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.229074   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229112   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.229120   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229147   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.230316   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0819 18:17:32.230746   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.231408   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.231437   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.231812   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.232018   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.235965   62137 addons.go:234] Setting addon default-storageclass=true in "no-preload-233969"
	W0819 18:17:32.235986   62137 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:17:32.236013   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.236365   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.236392   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.244668   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0819 18:17:32.245056   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.245506   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.245534   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.245816   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0819 18:17:32.245848   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.245989   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.246239   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.246795   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.246811   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.247182   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.247380   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.248517   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.249498   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.250817   62137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:17:32.251649   62137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:17:30.348988   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:32.252466   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:17:32.252483   62137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:17:32.252501   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253309   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0819 18:17:32.253687   62137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.253701   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:17:32.253717   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253828   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.254340   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.254352   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.254706   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.255288   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.255324   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.256274   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256776   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.256796   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256970   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.257109   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.257229   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.257348   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.257756   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258132   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.258144   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258384   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.258531   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.258663   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.258788   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.271706   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0819 18:17:32.272115   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.272558   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.272575   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.272875   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.273041   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.274711   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.274914   62137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.274924   62137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:17:32.274936   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.277689   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278191   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.278246   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278358   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.278533   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.278701   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.278847   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.423546   62137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:17:32.445680   62137 node_ready.go:35] waiting up to 6m0s for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.471999   62137 node_ready.go:49] node "no-preload-233969" has status "Ready":"True"
	I0819 18:17:32.472028   62137 node_ready.go:38] duration metric: took 26.307315ms for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.472041   62137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:32.478401   62137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:32.518483   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.568928   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:17:32.568953   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:17:32.592301   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.645484   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:17:32.645513   62137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:17:32.715522   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:32.715552   62137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:17:32.781693   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:33.756997   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.238477445s)
	I0819 18:17:33.757035   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757044   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757051   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.164710772s)
	I0819 18:17:33.757088   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757101   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757454   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757450   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757466   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757475   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757483   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757490   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757538   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757564   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757616   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757640   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757712   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757729   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757733   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757852   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757915   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757937   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.831562   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.831588   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.831891   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.831907   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928005   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146269845s)
	I0819 18:17:33.928064   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928082   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928391   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928438   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928452   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928465   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928477   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928809   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928820   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928835   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928851   62137 addons.go:475] Verifying addon metrics-server=true in "no-preload-233969"
	I0819 18:17:33.930974   62137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 18:17:33.932101   62137 addons.go:510] duration metric: took 1.72114773s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0819 18:17:34.486566   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:33.421045   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:36.984891   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.484617   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.500962   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:42.572983   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:41.990189   62137 pod_ready.go:93] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.990210   62137 pod_ready.go:82] duration metric: took 9.511780534s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.990221   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997282   62137 pod_ready.go:93] pod "kube-apiserver-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.997301   62137 pod_ready.go:82] duration metric: took 7.074393ms for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997310   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008757   62137 pod_ready.go:93] pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.008775   62137 pod_ready.go:82] duration metric: took 11.458424ms for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008785   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017802   62137 pod_ready.go:93] pod "kube-proxy-pt5nj" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.017820   62137 pod_ready.go:82] duration metric: took 9.029628ms for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017828   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025402   62137 pod_ready.go:93] pod "kube-scheduler-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.025424   62137 pod_ready.go:82] duration metric: took 7.589229ms for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025433   62137 pod_ready.go:39] duration metric: took 9.553379252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:42.025451   62137 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:17:42.025508   62137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:17:42.043190   62137 api_server.go:72] duration metric: took 9.832267712s to wait for apiserver process to appear ...
	I0819 18:17:42.043214   62137 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:17:42.043231   62137 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I0819 18:17:42.051124   62137 api_server.go:279] https://192.168.50.8:8443/healthz returned 200:
	ok
	I0819 18:17:42.052367   62137 api_server.go:141] control plane version: v1.31.0
	I0819 18:17:42.052392   62137 api_server.go:131] duration metric: took 9.170652ms to wait for apiserver health ...
	I0819 18:17:42.052404   62137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:17:42.187227   62137 system_pods.go:59] 9 kube-system pods found
	I0819 18:17:42.187254   62137 system_pods.go:61] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.187259   62137 system_pods.go:61] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.187263   62137 system_pods.go:61] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.187267   62137 system_pods.go:61] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.187270   62137 system_pods.go:61] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.187273   62137 system_pods.go:61] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.187277   62137 system_pods.go:61] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.187282   62137 system_pods.go:61] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.187285   62137 system_pods.go:61] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.187292   62137 system_pods.go:74] duration metric: took 134.882111ms to wait for pod list to return data ...
	I0819 18:17:42.187299   62137 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:17:42.382612   62137 default_sa.go:45] found service account: "default"
	I0819 18:17:42.382643   62137 default_sa.go:55] duration metric: took 195.337173ms for default service account to be created ...
	I0819 18:17:42.382652   62137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:17:42.585988   62137 system_pods.go:86] 9 kube-system pods found
	I0819 18:17:42.586024   62137 system_pods.go:89] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.586032   62137 system_pods.go:89] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.586038   62137 system_pods.go:89] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.586044   62137 system_pods.go:89] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.586049   62137 system_pods.go:89] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.586056   62137 system_pods.go:89] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.586062   62137 system_pods.go:89] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.586072   62137 system_pods.go:89] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.586078   62137 system_pods.go:89] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.586089   62137 system_pods.go:126] duration metric: took 203.431371ms to wait for k8s-apps to be running ...
	I0819 18:17:42.586101   62137 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:17:42.586154   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:42.601268   62137 system_svc.go:56] duration metric: took 15.156104ms WaitForService to wait for kubelet
	I0819 18:17:42.601305   62137 kubeadm.go:582] duration metric: took 10.39038433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:17:42.601330   62137 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:17:42.783030   62137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:17:42.783058   62137 node_conditions.go:123] node cpu capacity is 2
	I0819 18:17:42.783069   62137 node_conditions.go:105] duration metric: took 181.734608ms to run NodePressure ...
	I0819 18:17:42.783080   62137 start.go:241] waiting for startup goroutines ...
	I0819 18:17:42.783087   62137 start.go:246] waiting for cluster config update ...
	I0819 18:17:42.783097   62137 start.go:255] writing updated cluster config ...
	I0819 18:17:42.783349   62137 ssh_runner.go:195] Run: rm -f paused
	I0819 18:17:42.831445   62137 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:17:42.833881   62137 out.go:177] * Done! kubectl is now configured to use "no-preload-233969" cluster and "default" namespace by default
	I0819 18:17:48.653035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:51.725070   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:57.805043   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:00.881114   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:06.956979   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.974002   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:18:09.974108   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:18:09.975602   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:18:09.975650   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:18:09.975736   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:18:09.975861   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:18:09.975993   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:18:09.976086   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:18:09.978023   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:18:09.978100   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:18:09.978157   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:18:09.978230   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:18:09.978281   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:18:09.978358   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:18:09.978408   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:18:09.978466   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:18:09.978529   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:18:09.978645   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:18:09.978758   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:18:09.978816   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:18:09.978890   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:18:09.978973   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:18:09.979046   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:18:09.979138   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:18:09.979191   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:18:09.979339   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:18:09.979438   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:18:09.979503   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:18:09.979595   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:18:10.028995   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.981931   63216 out.go:235]   - Booting up control plane ...
	I0819 18:18:09.982014   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:18:09.982087   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:18:09.982142   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:18:09.982213   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:18:09.982378   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:18:09.982432   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:18:09.982491   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982715   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982914   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982996   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983204   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983268   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983424   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983485   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983656   63216 kubeadm.go:310] 
	I0819 18:18:09.983705   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:18:09.983747   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:18:09.983754   63216 kubeadm.go:310] 
	I0819 18:18:09.983788   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:18:09.983818   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:18:09.983957   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:18:09.983982   63216 kubeadm.go:310] 
	I0819 18:18:09.984089   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:18:09.984119   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:18:09.984175   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:18:09.984186   63216 kubeadm.go:310] 
	I0819 18:18:09.984277   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:18:09.984372   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:18:09.984378   63216 kubeadm.go:310] 
	I0819 18:18:09.984474   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:18:09.984552   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:18:09.984621   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:18:09.984699   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:18:09.984762   63216 kubeadm.go:310] 
	W0819 18:18:09.984832   63216 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 18:18:09.984873   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:18:10.439037   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:10.453739   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:18:10.463241   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:18:10.463262   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:18:10.463313   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:18:10.472407   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:18:10.472467   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:18:10.481297   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:18:10.489478   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:18:10.489542   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:18:10.498042   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.506373   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:18:10.506433   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.515158   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:18:10.523412   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:18:10.523483   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:18:10.532060   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:18:10.746836   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:18:16.109014   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:19.180970   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:25.261041   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:28.333057   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:34.412966   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:37.485036   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:43.565013   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:46.637059   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:52.716967   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:55.789060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:01.869005   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:04.941027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:11.020989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:14.093067   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:20.173021   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:23.248974   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:29.324961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:32.397037   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:38.477031   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:41.549001   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:47.629019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:50.700996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:56.781035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:59.853000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:06.430174   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:20:06.430256   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:20:06.431894   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:20:06.431968   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:20:06.432060   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:20:06.432203   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:20:06.432334   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:20:06.432440   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:20:06.434250   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:20:06.434349   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:20:06.434444   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:20:06.434563   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:20:06.434623   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:20:06.434717   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:20:06.434805   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:20:06.434894   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:20:06.434974   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:20:06.435052   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:20:06.435135   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:20:06.435204   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:20:06.435288   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:20:06.435365   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:20:06.435421   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:20:06.435474   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:20:06.435531   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:20:06.435689   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:20:06.435781   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:20:06.435827   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:20:06.435886   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:20:06.437538   63216 out.go:235]   - Booting up control plane ...
	I0819 18:20:06.437678   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:20:06.437771   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:20:06.437852   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:20:06.437928   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:20:06.438063   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:20:06.438105   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:20:06.438164   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438342   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438416   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438568   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438637   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438821   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438902   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439167   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439264   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439458   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439472   63216 kubeadm.go:310] 
	I0819 18:20:06.439514   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:20:06.439547   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:20:06.439553   63216 kubeadm.go:310] 
	I0819 18:20:06.439583   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:20:06.439626   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:20:06.439732   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:20:06.439749   63216 kubeadm.go:310] 
	I0819 18:20:06.439873   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:20:06.439915   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:20:06.439944   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:20:06.439952   63216 kubeadm.go:310] 
	I0819 18:20:06.440039   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:20:06.440106   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:20:06.440113   63216 kubeadm.go:310] 
	I0819 18:20:06.440252   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:20:06.440329   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:20:06.440392   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:20:06.440458   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:20:06.440521   63216 kubeadm.go:394] duration metric: took 8m2.012853316s to StartCluster
	I0819 18:20:06.440524   63216 kubeadm.go:310] 
	I0819 18:20:06.440559   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:20:06.440610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:20:06.481255   63216 cri.go:89] found id: ""
	I0819 18:20:06.481285   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.481297   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:20:06.481305   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:20:06.481364   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:20:06.516769   63216 cri.go:89] found id: ""
	I0819 18:20:06.516801   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.516811   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:20:06.516818   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:20:06.516933   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:20:06.551964   63216 cri.go:89] found id: ""
	I0819 18:20:06.551998   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.552006   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:20:06.552014   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:20:06.552108   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:20:06.586084   63216 cri.go:89] found id: ""
	I0819 18:20:06.586115   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.586124   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:20:06.586131   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:20:06.586189   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:20:06.620732   63216 cri.go:89] found id: ""
	I0819 18:20:06.620773   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.620785   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:20:06.620792   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:20:06.620843   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:20:06.659731   63216 cri.go:89] found id: ""
	I0819 18:20:06.659762   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.659772   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:20:06.659780   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:20:06.659846   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:20:06.694223   63216 cri.go:89] found id: ""
	I0819 18:20:06.694257   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.694267   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:20:06.694275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:20:06.694337   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:20:06.727474   63216 cri.go:89] found id: ""
	I0819 18:20:06.727508   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.727518   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:20:06.727528   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:20:06.727538   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:20:06.778006   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:20:06.778041   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:20:06.792059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:20:06.792089   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:20:06.863596   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:20:06.863625   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:20:06.863637   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:20:06.979710   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:20:06.979752   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0819 18:20:07.030879   63216 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:20:07.030930   63216 out.go:270] * 
	W0819 18:20:07.031004   63216 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.031025   63216 out.go:270] * 
	W0819 18:20:07.031896   63216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:20:07.035220   63216 out.go:201] 
	W0819 18:20:07.036384   63216 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.036435   63216 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:20:07.036466   63216 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:20:07.037783   63216 out.go:201] 
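The kubeadm wait-control-plane timeout above is for the v1.20.0 cluster started by process 63216; the log itself points at 'journalctl -xeu kubelet' and at retrying with the kubelet cgroup driver pinned to systemd. A hedged sketch of that follow-up, with <profile> standing in for the affected profile name (which is not shown in this excerpt):

	# inspect why the kubelet never answered on :10248
	minikube ssh -p <profile> -- sudo journalctl -xeu kubelet | tail -n 50
	# retry with the cgroup driver the log suggests
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd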
	I0819 18:20:05.933003   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:09.009065   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:15.085040   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:18.160990   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:24.236968   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:27.308959   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:30.310609   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:20:30.310648   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.310938   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:30.310975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.311173   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:30.312703   66229 machine.go:96] duration metric: took 4m37.4225796s to provisionDockerMachine
	I0819 18:20:30.312767   66229 fix.go:56] duration metric: took 4m37.446430724s for fixHost
	I0819 18:20:30.312775   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 4m37.446469265s
	W0819 18:20:30.312789   66229 start.go:714] error starting host: provision: host is not running
	W0819 18:20:30.312878   66229 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 18:20:30.312887   66229 start.go:729] Will try again in 5 seconds ...
	I0819 18:20:35.313124   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:20:35.313223   66229 start.go:364] duration metric: took 60.186µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:20:35.313247   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:20:35.313256   66229 fix.go:54] fixHost starting: 
	I0819 18:20:35.313555   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:20:35.313581   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:20:35.330972   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0819 18:20:35.331433   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:20:35.331878   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:20:35.331897   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:20:35.332189   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:20:35.332376   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:35.332546   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:20:35.334335   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Stopped err=<nil>
	I0819 18:20:35.334360   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	W0819 18:20:35.334529   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:20:35.336031   66229 out.go:177] * Restarting existing kvm2 VM for "embed-certs-306581" ...
	I0819 18:20:35.337027   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Start
	I0819 18:20:35.337166   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring networks are active...
	I0819 18:20:35.337905   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network default is active
	I0819 18:20:35.338212   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network mk-embed-certs-306581 is active
	I0819 18:20:35.338534   66229 main.go:141] libmachine: (embed-certs-306581) Getting domain xml...
	I0819 18:20:35.339265   66229 main.go:141] libmachine: (embed-certs-306581) Creating domain...
	I0819 18:20:36.576142   66229 main.go:141] libmachine: (embed-certs-306581) Waiting to get IP...
	I0819 18:20:36.577067   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.577471   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.577553   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.577459   67882 retry.go:31] will retry after 288.282156ms: waiting for machine to come up
	I0819 18:20:36.866897   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.867437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.867507   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.867415   67882 retry.go:31] will retry after 357.773556ms: waiting for machine to come up
	I0819 18:20:37.227139   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.227672   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.227697   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.227620   67882 retry.go:31] will retry after 360.777442ms: waiting for machine to come up
	I0819 18:20:37.590245   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.590696   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.590725   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.590672   67882 retry.go:31] will retry after 502.380794ms: waiting for machine to come up
	I0819 18:20:38.094422   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.094938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.094963   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.094893   67882 retry.go:31] will retry after 716.370935ms: waiting for machine to come up
	I0819 18:20:38.812922   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.813416   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.813437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.813381   67882 retry.go:31] will retry after 728.320282ms: waiting for machine to come up
	I0819 18:20:39.543316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:39.543705   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:39.543731   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:39.543668   67882 retry.go:31] will retry after 725.532345ms: waiting for machine to come up
	I0819 18:20:40.270826   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:40.271325   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:40.271347   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:40.271280   67882 retry.go:31] will retry after 1.054064107s: waiting for machine to come up
	I0819 18:20:41.326463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:41.326952   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:41.326983   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:41.326896   67882 retry.go:31] will retry after 1.258426337s: waiting for machine to come up
	I0819 18:20:42.587252   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:42.587685   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:42.587715   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:42.587645   67882 retry.go:31] will retry after 1.884128664s: waiting for machine to come up
	I0819 18:20:44.474042   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:44.474569   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:44.474592   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:44.474528   67882 retry.go:31] will retry after 2.484981299s: waiting for machine to come up
	I0819 18:20:46.961480   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:46.961991   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:46.962010   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:46.961956   67882 retry.go:31] will retry after 2.912321409s: waiting for machine to come up
	I0819 18:20:49.877938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:49.878388   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:49.878414   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:49.878347   67882 retry.go:31] will retry after 4.020459132s: waiting for machine to come up
	I0819 18:20:53.901782   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902239   66229 main.go:141] libmachine: (embed-certs-306581) Found IP for machine: 192.168.72.181
	I0819 18:20:53.902260   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has current primary IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902266   66229 main.go:141] libmachine: (embed-certs-306581) Reserving static IP address...
	I0819 18:20:53.902757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.902779   66229 main.go:141] libmachine: (embed-certs-306581) DBG | skip adding static IP to network mk-embed-certs-306581 - found existing host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"}
	I0819 18:20:53.902789   66229 main.go:141] libmachine: (embed-certs-306581) Reserved static IP address: 192.168.72.181
	I0819 18:20:53.902800   66229 main.go:141] libmachine: (embed-certs-306581) Waiting for SSH to be available...
	I0819 18:20:53.902808   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Getting to WaitForSSH function...
	I0819 18:20:53.904907   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905284   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.905316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH client type: external
	I0819 18:20:53.905434   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa (-rw-------)
	I0819 18:20:53.905466   66229 main.go:141] libmachine: (embed-certs-306581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:20:53.905481   66229 main.go:141] libmachine: (embed-certs-306581) DBG | About to run SSH command:
	I0819 18:20:53.905493   66229 main.go:141] libmachine: (embed-certs-306581) DBG | exit 0
	I0819 18:20:54.024614   66229 main.go:141] libmachine: (embed-certs-306581) DBG | SSH cmd err, output: <nil>: 
	I0819 18:20:54.024991   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetConfigRaw
	I0819 18:20:54.025614   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.028496   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.028901   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.028935   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.029207   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:20:54.029412   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:20:54.029430   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.029630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.032073   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032436   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.032463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032647   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.032822   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033002   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033136   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.033284   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.033483   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.033498   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:20:54.132908   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 18:20:54.132938   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133214   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:54.133238   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133426   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.135967   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136324   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.136356   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136507   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.136713   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.136873   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.137028   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.137215   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.137423   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.137437   66229 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-306581 && echo "embed-certs-306581" | sudo tee /etc/hostname
	I0819 18:20:54.250819   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-306581
	
	I0819 18:20:54.250849   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.253776   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254119   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.254150   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254351   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.254574   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254718   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254872   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.255090   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.255269   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.255286   66229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-306581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-306581/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-306581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:20:54.361268   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:20:54.361300   66229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:20:54.361328   66229 buildroot.go:174] setting up certificates
	I0819 18:20:54.361342   66229 provision.go:84] configureAuth start
	I0819 18:20:54.361359   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.361630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.364099   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364511   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.364541   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364666   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.366912   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367301   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.367329   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367447   66229 provision.go:143] copyHostCerts
	I0819 18:20:54.367496   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:20:54.367515   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:20:54.367586   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:20:54.367687   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:20:54.367699   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:20:54.367737   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:20:54.367824   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:20:54.367834   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:20:54.367860   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:20:54.367919   66229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.embed-certs-306581 san=[127.0.0.1 192.168.72.181 embed-certs-306581 localhost minikube]
	I0819 18:20:54.424019   66229 provision.go:177] copyRemoteCerts
	I0819 18:20:54.424075   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:20:54.424096   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.426737   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.426994   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.427016   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.427171   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.427380   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.427523   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.427645   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.506517   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:20:54.530454   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 18:20:54.552740   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:20:54.574870   66229 provision.go:87] duration metric: took 213.51055ms to configureAuth
	I0819 18:20:54.574904   66229 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:20:54.575077   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:20:54.575213   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.577946   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578283   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.578312   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578484   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.578683   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578878   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578993   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.579122   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.579267   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.579281   66229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:20:54.825788   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:20:54.825815   66229 machine.go:96] duration metric: took 796.390996ms to provisionDockerMachine
	I0819 18:20:54.825826   66229 start.go:293] postStartSetup for "embed-certs-306581" (driver="kvm2")
	I0819 18:20:54.825836   66229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:20:54.825850   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.826187   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:20:54.826214   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.829048   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829433   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.829462   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829582   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.829819   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.829963   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.830093   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.911609   66229 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:20:54.915894   66229 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:20:54.915916   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:20:54.915979   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:20:54.916049   66229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:20:54.916134   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:20:54.926185   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:20:54.952362   66229 start.go:296] duration metric: took 126.500839ms for postStartSetup
	I0819 18:20:54.952401   66229 fix.go:56] duration metric: took 19.639145598s for fixHost
	I0819 18:20:54.952420   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.955522   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.955881   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.955909   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.956078   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.956270   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956450   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.956785   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.956940   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.956950   66229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:20:55.053204   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091655.030704823
	
	I0819 18:20:55.053229   66229 fix.go:216] guest clock: 1724091655.030704823
	I0819 18:20:55.053237   66229 fix.go:229] Guest: 2024-08-19 18:20:55.030704823 +0000 UTC Remote: 2024-08-19 18:20:54.952405352 +0000 UTC m=+302.228892640 (delta=78.299471ms)
	I0819 18:20:55.053254   66229 fix.go:200] guest clock delta is within tolerance: 78.299471ms
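For reference, the delta reported above is simply guest minus host wall clock: 18:20:55.030704823 - 18:20:54.952405352 = 0.078299471 s, i.e. the 78.299471ms logged, which the tolerance check accepts (the threshold value itself does not appear in this excerpt).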
	I0819 18:20:55.053261   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 19.740028573s
	I0819 18:20:55.053277   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.053530   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:55.056146   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056523   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.056546   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056677   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057135   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057320   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057404   66229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:20:55.057445   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.057497   66229 ssh_runner.go:195] Run: cat /version.json
	I0819 18:20:55.057518   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.059944   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.059969   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060265   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060296   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060359   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060416   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060528   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060672   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060778   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060838   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060899   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.060941   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.183438   66229 ssh_runner.go:195] Run: systemctl --version
	I0819 18:20:55.189341   66229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:20:55.330628   66229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:20:55.336807   66229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:20:55.336877   66229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:20:55.351865   66229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:20:55.351893   66229 start.go:495] detecting cgroup driver to use...
	I0819 18:20:55.351988   66229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:20:55.368983   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:20:55.382795   66229 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:20:55.382848   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:20:55.396175   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:20:55.409333   66229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:20:55.534054   66229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:20:55.685410   66229 docker.go:233] disabling docker service ...
	I0819 18:20:55.685483   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:20:55.699743   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:20:55.712425   66229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:20:55.842249   66229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:20:55.964126   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:20:55.978354   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:20:55.995963   66229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:20:55.996032   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.006717   66229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:20:56.006810   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.017350   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.027098   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.037336   66229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:20:56.047188   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.059128   66229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.076950   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.087819   66229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:20:56.097922   66229 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:20:56.097980   66229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:20:56.114569   66229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
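The netfilter probing above (sysctl check, modprobe, echo into /proc) can be reproduced by hand; a minimal sketch, assuming a guest where br_netfilter is built as a module:

  # Reproduce the bridge-netfilter checks minikube just ran.
  sudo modprobe br_netfilter
  sysctl net.bridge.bridge-nf-call-iptables       # defaults to 1 once the module is loaded
  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
  sysctl net.ipv4.ip_forward                      # should report 1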
	I0819 18:20:56.130215   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:20:56.243812   66229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:20:56.376166   66229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:20:56.376294   66229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:20:56.380916   66229 start.go:563] Will wait 60s for crictl version
	I0819 18:20:56.380973   66229 ssh_runner.go:195] Run: which crictl
	I0819 18:20:56.384492   66229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:20:56.421992   66229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
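The version probe above goes through crictl, which picks up the runtime endpoint written to /etc/crictl.yaml a few lines earlier. A manual equivalent using standard crictl subcommands:

  # Query the CRI runtime over the socket from /etc/crictl.yaml.
  sudo crictl version     # RuntimeName / RuntimeVersion, as in the output above
  sudo crictl info        # fuller runtime status and config dump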
	I0819 18:20:56.422058   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.448657   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.477627   66229 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:20:56.479098   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:56.482364   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:56.482800   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482997   66229 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 18:20:56.486798   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
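The /etc/hosts update above uses a grep-and-rewrite pattern so the entry stays unique across restarts. A stand-alone sketch of the same idea (IP and hostname taken from the log; the temp-file path is illustrative):

  # Idempotently pin host.minikube.internal to the gateway IP.
  IP=192.168.72.1
  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
    printf '%s\thost.minikube.internal\n' "$IP"; } > /tmp/hosts.new
  sudo cp /tmp/hosts.new /etc/hosts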
	I0819 18:20:56.498662   66229 kubeadm.go:883] updating cluster {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:20:56.498820   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:20:56.498890   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:56.534076   66229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:20:56.534137   66229 ssh_runner.go:195] Run: which lz4
	I0819 18:20:56.537906   66229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:20:56.541691   66229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:20:56.541726   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:20:57.728202   66229 crio.go:462] duration metric: took 1.190335452s to copy over tarball
	I0819 18:20:57.728263   66229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:20:59.870389   66229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.142096936s)
	I0819 18:20:59.870434   66229 crio.go:469] duration metric: took 2.142210052s to extract the tarball
	I0819 18:20:59.870443   66229 ssh_runner.go:146] rm: /preloaded.tar.lz4
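The preload path above is: stat the tarball on the guest, scp it from the local cache when missing, then unpack it into /var so cri-o's image store is pre-populated. A minimal sketch of the unpack step (paths and flags taken from the log; requires lz4 on the guest):

  # Extract a minikube preload tarball into the runtime's storage root.
  sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo crictl images              # the preloaded kube images should now be listed
  sudo rm -f /preloaded.tar.lz4   # the tarball is only needed once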
	I0819 18:20:59.907013   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:59.949224   66229 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:20:59.949244   66229 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:20:59.949257   66229 kubeadm.go:934] updating node { 192.168.72.181 8443 v1.31.0 crio true true} ...
	I0819 18:20:59.949790   66229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-306581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:20:59.949851   66229 ssh_runner.go:195] Run: crio config
	I0819 18:20:59.993491   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:20:59.993521   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:20:59.993535   66229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:20:59.993561   66229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.181 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-306581 NodeName:embed-certs-306581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:20:59.993735   66229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-306581"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:20:59.993814   66229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:21:00.003488   66229 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:21:00.003563   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:21:00.012546   66229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0819 18:21:00.028546   66229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:21:00.044037   66229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
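At this point the generated kubeadm config (dumped above) and the kubelet unit files have been copied onto the node. If you want to sanity-check such a file by hand before the init phases below consume it, recent kubeadm releases ship a validator; a sketch, assuming the same paths as in the log:

  # Validate the generated config against the kubeadm API types (kubeadm >= 1.26).
  sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new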
	I0819 18:21:00.059422   66229 ssh_runner.go:195] Run: grep 192.168.72.181	control-plane.minikube.internal$ /etc/hosts
	I0819 18:21:00.062992   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:21:00.075172   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:21:00.213050   66229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:21:00.230086   66229 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581 for IP: 192.168.72.181
	I0819 18:21:00.230114   66229 certs.go:194] generating shared ca certs ...
	I0819 18:21:00.230135   66229 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:21:00.230303   66229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:21:00.230371   66229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:21:00.230386   66229 certs.go:256] generating profile certs ...
	I0819 18:21:00.230506   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/client.key
	I0819 18:21:00.230593   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key.cf6a9e5e
	I0819 18:21:00.230652   66229 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key
	I0819 18:21:00.230819   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:21:00.230863   66229 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:21:00.230877   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:21:00.230912   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:21:00.230951   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:21:00.230985   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:21:00.231053   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:21:00.231968   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:21:00.265793   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:21:00.292911   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:21:00.333617   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:21:00.361258   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 18:21:00.394711   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:21:00.417880   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:21:00.440771   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:21:00.464416   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:21:00.489641   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:21:00.512135   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:21:00.535608   66229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:21:00.552131   66229 ssh_runner.go:195] Run: openssl version
	I0819 18:21:00.557821   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:21:00.568710   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573178   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573239   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.578820   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:21:00.589649   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:21:00.600652   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.604986   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.605049   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.610552   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:21:00.620514   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:21:00.630217   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634541   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634599   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.639839   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
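The openssl/ln pairs above implement the usual OpenSSL hashed-symlink scheme: each CA certificate under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 link so TLS clients can find it. A generic sketch of the same pattern (the cert path is just the example from the log):

  # Link a CA certificate under its OpenSSL subject hash.
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941, as above
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"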
	I0819 18:21:00.649821   66229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:21:00.654288   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:21:00.660071   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:21:00.665354   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:21:00.670791   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:21:00.676451   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:21:00.682099   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
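The -checkend 86400 probes above ask whether each control-plane certificate is still valid for at least another 24 hours (86400 seconds); openssl exits non-zero if not. A loop over the same directory:

  # Warn about any minikube-managed certificate expiring within 24h.
  for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
    sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"
  done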
	I0819 18:21:00.687792   66229 kubeadm.go:392] StartCluster: {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:21:00.687869   66229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:21:00.687914   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.730692   66229 cri.go:89] found id: ""
	I0819 18:21:00.730762   66229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:21:00.740607   66229 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 18:21:00.740627   66229 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 18:21:00.740687   66229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 18:21:00.750127   66229 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:21:00.751927   66229 kubeconfig.go:125] found "embed-certs-306581" server: "https://192.168.72.181:8443"
	I0819 18:21:00.754865   66229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 18:21:00.764102   66229 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.181
	I0819 18:21:00.764130   66229 kubeadm.go:1160] stopping kube-system containers ...
	I0819 18:21:00.764142   66229 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 18:21:00.764210   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.797866   66229 cri.go:89] found id: ""
	I0819 18:21:00.797939   66229 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 18:21:00.815065   66229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:21:00.824279   66229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:21:00.824297   66229 kubeadm.go:157] found existing configuration files:
	
	I0819 18:21:00.824336   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:21:00.832688   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:21:00.832766   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:21:00.841795   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:21:00.852300   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:21:00.852358   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:21:00.862973   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.873195   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:21:00.873243   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.882559   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:21:00.892687   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:21:00.892774   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:21:00.903746   66229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:21:00.913161   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.017511   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.829503   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.047620   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.105126   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.157817   66229 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:21:02.157927   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:02.658716   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.158468   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.658865   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.157979   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.175682   66229 api_server.go:72] duration metric: took 2.017872037s to wait for apiserver process to appear ...
	I0819 18:21:04.175711   66229 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:21:04.175731   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.251226   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.251253   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.251265   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.290762   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.290788   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.676347   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.695167   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:07.695220   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.176382   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.183772   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:08.183816   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.676435   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.680898   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0819 18:21:08.686996   66229 api_server.go:141] control plane version: v1.31.0
	I0819 18:21:08.687023   66229 api_server.go:131] duration metric: took 4.511304673s to wait for apiserver health ...
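The /healthz polling above can be reproduced with curl. Anonymous requests are rejected with 403 until the RBAC bootstrap roles (which grant system:unauthenticated access to /healthz) exist, after which the endpoint answers without credentials. A sketch that skips TLS verification, since the serving cert is signed by the cluster-local CA:

  # Poll the apiserver health endpoint the same way the log does.
  curl -ks https://192.168.72.181:8443/healthz; echo
  curl -ks 'https://192.168.72.181:8443/healthz?verbose'   # per-check breakdown, as in the 500 bodies above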
	I0819 18:21:08.687031   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:21:08.687037   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:21:08.688988   66229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:21:08.690213   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:21:08.701051   66229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
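The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config; its exact contents are not shown in the log. The comment below sketches the generic shape of a bridge+portmap conflist for orientation only, and the single command compares it against what was actually written:

  # Hypothetical shape of a bridge+portmap conflist -- not minikube's literal file:
  #   { "cniVersion": "0.4.0", "name": "bridge",
  #     "plugins": [
  #       { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
  #         "ipam": { "type": "host-local", "subnet": "10.244.0.0/16",
  #                   "routes": [ { "dst": "0.0.0.0/0" } ] } },
  #       { "type": "portmap", "capabilities": { "portMappings": true } } ] }
  sudo cat /etc/cni/net.d/1-k8s.conflist    # what minikube actually wrote on this node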
	I0819 18:21:08.719754   66229 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:21:08.732139   66229 system_pods.go:59] 8 kube-system pods found
	I0819 18:21:08.732172   66229 system_pods.go:61] "coredns-6f6b679f8f-222n6" [1d55fb75-011d-4517-8601-b55ff22d0fe1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:21:08.732179   66229 system_pods.go:61] "etcd-embed-certs-306581" [0b299b0b-00ec-45d6-9e5f-6f8677734138] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 18:21:08.732187   66229 system_pods.go:61] "kube-apiserver-embed-certs-306581" [c0342f0d-3e9b-4118-abcb-e6585ec8205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 18:21:08.732192   66229 system_pods.go:61] "kube-controller-manager-embed-certs-306581" [3e8441b3-f3cc-4e0b-9e9b-2dc1fd41ca1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 18:21:08.732196   66229 system_pods.go:61] "kube-proxy-4vt6x" [559e4638-9505-4d7f-b84e-77b813c84ab4] Running
	I0819 18:21:08.732204   66229 system_pods.go:61] "kube-scheduler-embed-certs-306581" [39ec99a8-3e38-40f6-af5e-02a437573bd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 18:21:08.732210   66229 system_pods.go:61] "metrics-server-6867b74b74-dmpfh" [0edd2d8d-aa29-4817-babb-09e185fc0578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:21:08.732213   66229 system_pods.go:61] "storage-provisioner" [f267a05a-418f-49a9-b09d-a6330ffa4abf] Running
	I0819 18:21:08.732219   66229 system_pods.go:74] duration metric: took 12.445292ms to wait for pod list to return data ...
	I0819 18:21:08.732226   66229 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:21:08.735979   66229 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:21:08.736004   66229 node_conditions.go:123] node cpu capacity is 2
	I0819 18:21:08.736015   66229 node_conditions.go:105] duration metric: took 3.784963ms to run NodePressure ...
	I0819 18:21:08.736029   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:08.995746   66229 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001567   66229 kubeadm.go:739] kubelet initialised
	I0819 18:21:09.001592   66229 kubeadm.go:740] duration metric: took 5.816928ms waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001603   66229 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:21:09.006253   66229 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:11.015091   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:13.512551   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:15.512696   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:16.513342   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:16.513387   66229 pod_ready.go:82] duration metric: took 7.507092015s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:16.513404   66229 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519842   66229 pod_ready.go:93] pod "etcd-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.519864   66229 pod_ready.go:82] duration metric: took 1.006452738s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519873   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524383   66229 pod_ready.go:93] pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.524401   66229 pod_ready.go:82] duration metric: took 4.522465ms for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524411   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:19.536012   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:22.030530   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:23.530792   66229 pod_ready.go:93] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.530818   66229 pod_ready.go:82] duration metric: took 6.006401322s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.530828   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535011   66229 pod_ready.go:93] pod "kube-proxy-4vt6x" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.535030   66229 pod_ready.go:82] duration metric: took 4.196825ms for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535038   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538712   66229 pod_ready.go:93] pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.538731   66229 pod_ready.go:82] duration metric: took 3.686091ms for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538743   66229 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:25.545068   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:28.044531   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:30.044724   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:32.545647   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:35.044620   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:37.044937   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:39.045319   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:41.545155   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:43.545946   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:46.045829   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:48.544436   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:50.546582   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:53.045122   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:55.544595   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:57.544701   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:00.044887   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:02.044950   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:04.544241   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:06.546130   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:09.044418   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:11.045634   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:13.545020   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:16.045408   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:18.544890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:21.044294   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:23.045251   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:25.545598   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:27.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:30.044377   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:32.045041   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:34.045316   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:36.045466   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:38.543870   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:40.544216   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:42.545271   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:45.044619   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:47.045364   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:49.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:51.045992   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:53.544682   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:56.045091   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:58.045324   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:00.046083   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:02.545541   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:05.045078   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:07.544235   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:09.545586   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:12.045449   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:14.545054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:16.545253   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:19.044239   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:21.045012   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:23.045831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:25.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:28.045069   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:30.045417   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:32.545986   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:35.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:37.545427   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:39.545715   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:42.046173   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:44.545426   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:46.545560   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:48.546489   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:51.044803   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:53.044925   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:55.544871   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:57.545044   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:00.044157   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:02.045599   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:04.546054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:07.044956   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:09.044993   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:11.045233   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:13.046097   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:15.046223   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:17.544258   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:19.545890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:22.044892   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:24.045926   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:26.545100   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:29.044231   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:31.044942   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:33.545660   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:36.045482   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:38.545467   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:40.545731   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:43.045524   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:45.545299   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:48.044040   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:50.044556   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:52.046009   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:54.545370   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:57.044344   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:59.544590   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:02.045528   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:04.546831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:07.045865   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:09.544718   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:12.044142   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:14.045777   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:16.048107   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:18.545087   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:21.044910   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:23.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:23.539885   66229 pod_ready.go:82] duration metric: took 4m0.001128118s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" ...
	E0819 18:25:23.539910   66229 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:25:23.539927   66229 pod_ready.go:39] duration metric: took 4m14.538313663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
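The repeated pod_ready polls above are minikube's internal 4m0s readiness wait for the metrics-server pod. A roughly equivalent manual check, assuming the addon's usual k8s-app=metrics-server label (not shown in this log), would be:

    kubectl -n kube-system wait --for=condition=Ready pod \
        -l k8s-app=metrics-server --timeout=4m0s
    kubectl -n kube-system describe pod -l k8s-app=metrics-server   # see why it is not Ready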
	I0819 18:25:23.539953   66229 kubeadm.go:597] duration metric: took 4m22.799312728s to restartPrimaryControlPlane
	W0819 18:25:23.540007   66229 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:25:23.540040   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:25:49.757089   66229 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.217024974s)
	I0819 18:25:49.757162   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:25:49.771550   66229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:25:49.780916   66229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:25:49.789732   66229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:25:49.789751   66229 kubeadm.go:157] found existing configuration files:
	
	I0819 18:25:49.789796   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:25:49.798373   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:25:49.798436   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:25:49.807148   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:25:49.815466   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:25:49.815528   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:25:49.824320   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:25:49.832472   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:25:49.832523   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:25:49.841050   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:25:49.849186   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:25:49.849243   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
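The stale-config cleanup above amounts to grepping each kubeconfig for the control-plane endpoint and removing any file that does not contain it; a condensed shell sketch of the same steps:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done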
	I0819 18:25:49.857711   66229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:25:49.904029   66229 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:25:49.904211   66229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:25:50.021095   66229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:25:50.021242   66229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:25:50.021399   66229 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:25:50.031925   66229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:25:50.033989   66229 out.go:235]   - Generating certificates and keys ...
	I0819 18:25:50.034080   66229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:25:50.034163   66229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:25:50.034236   66229 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:25:50.034287   66229 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:25:50.034345   66229 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:25:50.034392   66229 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:25:50.034460   66229 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:25:50.034568   66229 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:25:50.034679   66229 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:25:50.034796   66229 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:25:50.034869   66229 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:25:50.034950   66229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:25:50.135488   66229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:25:50.189286   66229 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:25:50.602494   66229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:25:50.752478   66229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:25:51.009355   66229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:25:51.009947   66229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:25:51.012443   66229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:25:51.014364   66229 out.go:235]   - Booting up control plane ...
	I0819 18:25:51.014506   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:25:51.014618   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:25:51.014884   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:25:51.033153   66229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:25:51.040146   66229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:25:51.040228   66229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:25:51.167821   66229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:25:51.167952   66229 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:25:52.171536   66229 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003657825s
	I0819 18:25:52.171661   66229 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:25:56.673902   66229 kubeadm.go:310] [api-check] The API server is healthy after 4.502200468s
	I0819 18:25:56.700202   66229 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:25:56.718381   66229 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:25:56.745000   66229 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:25:56.745278   66229 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-306581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:25:56.759094   66229 kubeadm.go:310] [bootstrap-token] Using token: abvjrz.7whl2a0axm001wrp
	I0819 18:25:56.760573   66229 out.go:235]   - Configuring RBAC rules ...
	I0819 18:25:56.760724   66229 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:25:56.766575   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:25:56.780740   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:25:56.784467   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:25:56.788245   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:25:56.792110   66229 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:25:57.088316   66229 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:25:57.528128   66229 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:25:58.088280   66229 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:25:58.088324   66229 kubeadm.go:310] 
	I0819 18:25:58.088398   66229 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:25:58.088425   66229 kubeadm.go:310] 
	I0819 18:25:58.088559   66229 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:25:58.088585   66229 kubeadm.go:310] 
	I0819 18:25:58.088633   66229 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:25:58.088726   66229 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:25:58.088883   66229 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:25:58.088904   66229 kubeadm.go:310] 
	I0819 18:25:58.088983   66229 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:25:58.088996   66229 kubeadm.go:310] 
	I0819 18:25:58.089083   66229 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:25:58.089109   66229 kubeadm.go:310] 
	I0819 18:25:58.089185   66229 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:25:58.089294   66229 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:25:58.089419   66229 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:25:58.089441   66229 kubeadm.go:310] 
	I0819 18:25:58.089557   66229 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:25:58.089669   66229 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:25:58.089681   66229 kubeadm.go:310] 
	I0819 18:25:58.089798   66229 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token abvjrz.7whl2a0axm001wrp \
	I0819 18:25:58.089961   66229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:25:58.089995   66229 kubeadm.go:310] 	--control-plane 
	I0819 18:25:58.090005   66229 kubeadm.go:310] 
	I0819 18:25:58.090134   66229 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:25:58.090146   66229 kubeadm.go:310] 
	I0819 18:25:58.090270   66229 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token abvjrz.7whl2a0axm001wrp \
	I0819 18:25:58.090418   66229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:25:58.091186   66229 kubeadm.go:310] W0819 18:25:49.877896    2533 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:25:58.091610   66229 kubeadm.go:310] W0819 18:25:49.879026    2533 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:25:58.091792   66229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
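The two deprecation warnings point at kubeadm's own migration path for the v1beta3 config; a hedged example of running it against the config file used here (the output filename is illustrative):

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm.migrated.yaml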
	I0819 18:25:58.091814   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:25:58.091824   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:25:58.093554   66229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:25:58.094739   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:25:58.105125   66229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
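To inspect the bridge CNI configuration just written, one could look at the conflist and the plugin binaries on the node (the plugin directory is the conventional location, not something shown in this log):

    sudo cat /etc/cni/net.d/1-k8s.conflist   # the 496-byte conflist copied above
    sudo ls /opt/cni/bin                     # bridge, host-local, portmap, ... (typical layout)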
	I0819 18:25:58.123435   66229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:25:58.123526   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:58.123532   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-306581 minikube.k8s.io/updated_at=2024_08_19T18_25_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=embed-certs-306581 minikube.k8s.io/primary=true
	I0819 18:25:58.148101   66229 ops.go:34] apiserver oom_adj: -16
	I0819 18:25:58.298505   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:58.799549   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:59.299523   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:59.798660   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:00.299282   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:00.799040   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:01.298647   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:01.798822   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.299035   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.798965   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.914076   66229 kubeadm.go:1113] duration metric: took 4.790608101s to wait for elevateKubeSystemPrivileges
	I0819 18:26:02.914111   66229 kubeadm.go:394] duration metric: took 5m2.226323065s to StartCluster
	I0819 18:26:02.914132   66229 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:26:02.914214   66229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:26:02.915798   66229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:26:02.916048   66229 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:26:02.916134   66229 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:26:02.916258   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:26:02.916269   66229 addons.go:69] Setting default-storageclass=true in profile "embed-certs-306581"
	I0819 18:26:02.916257   66229 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-306581"
	I0819 18:26:02.916310   66229 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-306581"
	I0819 18:26:02.916342   66229 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-306581"
	I0819 18:26:02.916344   66229 addons.go:69] Setting metrics-server=true in profile "embed-certs-306581"
	W0819 18:26:02.916356   66229 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:26:02.916376   66229 addons.go:234] Setting addon metrics-server=true in "embed-certs-306581"
	I0819 18:26:02.916382   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	W0819 18:26:02.916389   66229 addons.go:243] addon metrics-server should already be in state true
	I0819 18:26:02.916427   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	I0819 18:26:02.916763   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916775   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916792   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.916805   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.916827   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916852   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.918733   66229 out.go:177] * Verifying Kubernetes components...
	I0819 18:26:02.920207   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:26:02.936535   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0819 18:26:02.936877   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0819 18:26:02.937025   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I0819 18:26:02.937128   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937375   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937485   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937675   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937698   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.937939   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937951   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937960   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.937965   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.938038   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938285   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938328   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938442   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.938611   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.938640   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.938821   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.938859   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.942730   66229 addons.go:234] Setting addon default-storageclass=true in "embed-certs-306581"
	W0819 18:26:02.942783   66229 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:26:02.942825   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	I0819 18:26:02.945808   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.945841   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.959554   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0819 18:26:02.959555   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0819 18:26:02.959950   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.960062   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.960479   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.960499   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.960634   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.960650   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.960793   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I0819 18:26:02.960976   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.961044   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.961090   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.961157   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.961205   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.961550   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.961571   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.961889   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.962444   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.962471   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.963100   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.963295   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.965320   66229 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:26:02.965389   66229 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:26:02.966795   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:26:02.966816   66229 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:26:02.966835   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.966935   66229 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:26:02.966956   66229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:26:02.966975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.970428   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.970527   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.970751   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.970771   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.971025   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.971047   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.971053   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.971198   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.971210   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.971364   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.971407   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.971526   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:02.971577   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.971704   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:02.978868   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0819 18:26:02.979249   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.979716   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.979734   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.980120   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.980329   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.982092   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.982322   66229 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:26:02.982337   66229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:26:02.982356   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.984740   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.985154   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.985175   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.985411   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.985583   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.985734   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.985861   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:03.159722   66229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:26:03.200632   66229 node_ready.go:35] waiting up to 6m0s for node "embed-certs-306581" to be "Ready" ...
	I0819 18:26:03.208989   66229 node_ready.go:49] node "embed-certs-306581" has status "Ready":"True"
	I0819 18:26:03.209020   66229 node_ready.go:38] duration metric: took 8.358821ms for node "embed-certs-306581" to be "Ready" ...
	I0819 18:26:03.209031   66229 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:26:03.215374   66229 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:03.293861   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:26:03.295078   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:26:03.362999   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:26:03.363021   66229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:26:03.455443   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:26:03.455471   66229 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:26:03.525137   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:26:03.525167   66229 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:26:03.594219   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:26:03.707027   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.707054   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.707419   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.707510   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.707526   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:03.707540   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.707551   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.707815   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.707863   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:03.707866   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.731452   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.731476   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.731752   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.731766   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.731774   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.521921   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.521943   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522255   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:04.522325   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.522338   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.522347   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.522369   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522422   66229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227312769s)
	I0819 18:26:04.522461   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.522472   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522548   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.522564   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.522574   66229 addons.go:475] Verifying addon metrics-server=true in "embed-certs-306581"
	I0819 18:26:04.523854   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:04.523859   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.523882   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.523899   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.523911   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.524115   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.524134   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.525754   66229 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0819 18:26:04.527292   66229 addons.go:510] duration metric: took 1.611171518s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
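With the metrics-server addon applied, a quick way to confirm whether it is actually serving metrics (standard metrics-server object names assumed; since this test points the addon at fake.domain/registry.k8s.io/echoserver:1.4, the pod is expected to stay Pending and kubectl top to keep failing):

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes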
	I0819 18:26:05.222505   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace has status "Ready":"False"
	I0819 18:26:06.222480   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.222511   66229 pod_ready.go:82] duration metric: took 3.00710581s for pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.222523   66229 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.229629   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.229653   66229 pod_ready.go:82] duration metric: took 7.122956ms for pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.229663   66229 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.234474   66229 pod_ready.go:93] pod "etcd-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.234497   66229 pod_ready.go:82] duration metric: took 4.828007ms for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.234510   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.239097   66229 pod_ready.go:93] pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.239114   66229 pod_ready.go:82] duration metric: took 4.596493ms for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.239123   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.745125   66229 pod_ready.go:93] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.745148   66229 pod_ready.go:82] duration metric: took 506.019468ms for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.745160   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-df5kf" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.019557   66229 pod_ready.go:93] pod "kube-proxy-df5kf" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:07.019594   66229 pod_ready.go:82] duration metric: took 274.427237ms for pod "kube-proxy-df5kf" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.019608   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.418650   66229 pod_ready.go:93] pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:07.418675   66229 pod_ready.go:82] duration metric: took 399.060317ms for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.418683   66229 pod_ready.go:39] duration metric: took 4.209640554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:26:07.418696   66229 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:26:07.418742   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:26:07.434205   66229 api_server.go:72] duration metric: took 4.518122629s to wait for apiserver process to appear ...
	I0819 18:26:07.434229   66229 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:26:07.434245   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:26:07.438540   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0819 18:26:07.439633   66229 api_server.go:141] control plane version: v1.31.0
	I0819 18:26:07.439654   66229 api_server.go:131] duration metric: took 5.418424ms to wait for apiserver health ...
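The healthz probe above can be reproduced from outside the process; /healthz and /readyz are served to unauthenticated clients by default:

    curl -k https://192.168.72.181:8443/healthz           # expect: ok
    curl -k "https://192.168.72.181:8443/readyz?verbose"  # per-check breakdown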
	I0819 18:26:07.439664   66229 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:26:07.622538   66229 system_pods.go:59] 9 kube-system pods found
	I0819 18:26:07.622567   66229 system_pods.go:61] "coredns-6f6b679f8f-274qq" [af408da7-683b-4730-b836-a5ae446e84d4] Running
	I0819 18:26:07.622575   66229 system_pods.go:61] "coredns-6f6b679f8f-j764j" [726e772d-dd20-4427-b8b2-40422b5be1ef] Running
	I0819 18:26:07.622580   66229 system_pods.go:61] "etcd-embed-certs-306581" [291235bc-9e42-4982-93c4-d77a0116a9ed] Running
	I0819 18:26:07.622583   66229 system_pods.go:61] "kube-apiserver-embed-certs-306581" [2068ba5f-ea2d-4b99-87e4-2c9d16861cd4] Running
	I0819 18:26:07.622587   66229 system_pods.go:61] "kube-controller-manager-embed-certs-306581" [057adac9-1819-4c28-8bdd-4b95cf4dd33f] Running
	I0819 18:26:07.622590   66229 system_pods.go:61] "kube-proxy-df5kf" [0f004f8f-d49f-468e-acac-a7d691c9cdba] Running
	I0819 18:26:07.622594   66229 system_pods.go:61] "kube-scheduler-embed-certs-306581" [58a0610a-0718-4151-8e0b-bf9dd0e7864a] Running
	I0819 18:26:07.622600   66229 system_pods.go:61] "metrics-server-6867b74b74-j8qbw" [6c7ec046-01e2-4903-9937-c79aabc81bb2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:26:07.622604   66229 system_pods.go:61] "storage-provisioner" [26d63f30-45fd-48f4-973e-6a72cf931b9d] Running
	I0819 18:26:07.622611   66229 system_pods.go:74] duration metric: took 182.941942ms to wait for pod list to return data ...
	I0819 18:26:07.622619   66229 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:26:07.820899   66229 default_sa.go:45] found service account: "default"
	I0819 18:26:07.820924   66229 default_sa.go:55] duration metric: took 198.300082ms for default service account to be created ...
	I0819 18:26:07.820934   66229 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:26:08.021777   66229 system_pods.go:86] 9 kube-system pods found
	I0819 18:26:08.021803   66229 system_pods.go:89] "coredns-6f6b679f8f-274qq" [af408da7-683b-4730-b836-a5ae446e84d4] Running
	I0819 18:26:08.021809   66229 system_pods.go:89] "coredns-6f6b679f8f-j764j" [726e772d-dd20-4427-b8b2-40422b5be1ef] Running
	I0819 18:26:08.021813   66229 system_pods.go:89] "etcd-embed-certs-306581" [291235bc-9e42-4982-93c4-d77a0116a9ed] Running
	I0819 18:26:08.021817   66229 system_pods.go:89] "kube-apiserver-embed-certs-306581" [2068ba5f-ea2d-4b99-87e4-2c9d16861cd4] Running
	I0819 18:26:08.021820   66229 system_pods.go:89] "kube-controller-manager-embed-certs-306581" [057adac9-1819-4c28-8bdd-4b95cf4dd33f] Running
	I0819 18:26:08.021825   66229 system_pods.go:89] "kube-proxy-df5kf" [0f004f8f-d49f-468e-acac-a7d691c9cdba] Running
	I0819 18:26:08.021829   66229 system_pods.go:89] "kube-scheduler-embed-certs-306581" [58a0610a-0718-4151-8e0b-bf9dd0e7864a] Running
	I0819 18:26:08.021836   66229 system_pods.go:89] "metrics-server-6867b74b74-j8qbw" [6c7ec046-01e2-4903-9937-c79aabc81bb2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:26:08.021840   66229 system_pods.go:89] "storage-provisioner" [26d63f30-45fd-48f4-973e-6a72cf931b9d] Running
	I0819 18:26:08.021847   66229 system_pods.go:126] duration metric: took 200.908452ms to wait for k8s-apps to be running ...
	I0819 18:26:08.021853   66229 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:26:08.021896   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:26:08.035873   66229 system_svc.go:56] duration metric: took 14.008336ms WaitForService to wait for kubelet
	I0819 18:26:08.035902   66229 kubeadm.go:582] duration metric: took 5.119824696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:26:08.035928   66229 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:26:08.219981   66229 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:26:08.220005   66229 node_conditions.go:123] node cpu capacity is 2
	I0819 18:26:08.220016   66229 node_conditions.go:105] duration metric: took 184.083094ms to run NodePressure ...
	I0819 18:26:08.220025   66229 start.go:241] waiting for startup goroutines ...
	I0819 18:26:08.220032   66229 start.go:246] waiting for cluster config update ...
	I0819 18:26:08.220041   66229 start.go:255] writing updated cluster config ...
	I0819 18:26:08.220295   66229 ssh_runner.go:195] Run: rm -f paused
	I0819 18:26:08.267438   66229 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:26:08.269435   66229 out.go:177] * Done! kubectl is now configured to use "embed-certs-306581" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.525825738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092149525789657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8b8ef78-ea46-4fa5-9ee8-4872a8531cf4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.526364346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19911dfc-bfb2-4ab4-b031-dd1346f1af19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.526471353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19911dfc-bfb2-4ab4-b031-dd1346f1af19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.526504311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=19911dfc-bfb2-4ab4-b031-dd1346f1af19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.556857184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c66d1a14-0042-45a3-853c-d674999e7700 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.556941363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c66d1a14-0042-45a3-853c-d674999e7700 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.558040980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79481e28-753e-4547-8ff8-7b5ee15edbd2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.558519287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092149558495698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79481e28-753e-4547-8ff8-7b5ee15edbd2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.559201492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31d0a59e-5225-4f41-ab3e-7d6e141f4334 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.559252184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31d0a59e-5225-4f41-ab3e-7d6e141f4334 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.559296148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=31d0a59e-5225-4f41-ab3e-7d6e141f4334 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.588807748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=461a6a61-7f0a-4eb4-87c8-fe2739b48dca name=/runtime.v1.RuntimeService/Version
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.588901835Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=461a6a61-7f0a-4eb4-87c8-fe2739b48dca name=/runtime.v1.RuntimeService/Version
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.590158408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8c21136-0538-4497-9815-730ba1a48903 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.590584751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092149590560827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8c21136-0538-4497-9815-730ba1a48903 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.591380885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=743e8e99-52c3-4d5a-a24d-6486619a230c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.591478577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=743e8e99-52c3-4d5a-a24d-6486619a230c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.591516860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=743e8e99-52c3-4d5a-a24d-6486619a230c name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.627170531Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2bfc0bc3-661e-4560-9984-0911bf62e241 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.627317501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2bfc0bc3-661e-4560-9984-0911bf62e241 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.628806402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5a9cc78-2389-4c7c-bf11-9850577285ed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.629349162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092149629320278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5a9cc78-2389-4c7c-bf11-9850577285ed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.630162212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc0124fb-d059-40b1-8064-beedf34c6008 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.630234962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc0124fb-d059-40b1-8064-beedf34c6008 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:29:09 old-k8s-version-079123 crio[645]: time="2024-08-19 18:29:09.630289493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bc0124fb-d059-40b1-8064-beedf34c6008 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug19 18:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050661] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037961] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.796045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.906924] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.551301] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.289032] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.062660] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073191] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.227214] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.148485] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.242620] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[Aug19 18:12] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.058214] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.166270] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[ +11.850102] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 18:16] systemd-fstab-generator[5127]: Ignoring "noauto" option for root device
	[Aug19 18:18] systemd-fstab-generator[5400]: Ignoring "noauto" option for root device
	[  +0.061151] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:29:09 up 17 min,  0 users,  load average: 0.07, 0.08, 0.05
	Linux old-k8s-version-079123 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000332cf0)
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: goroutine 157 [select]:
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00094fef0, 0x4f0ac20, 0xc000c2f360, 0x1, 0xc00009e0c0)
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001dc540, 0xc00009e0c0)
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c23270, 0xc0002c19e0)
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 19 18:29:06 old-k8s-version-079123 kubelet[6575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 19 18:29:06 old-k8s-version-079123 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 18:29:06 old-k8s-version-079123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 18:29:07 old-k8s-version-079123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 19 18:29:07 old-k8s-version-079123 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 18:29:07 old-k8s-version-079123 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 18:29:07 old-k8s-version-079123 kubelet[6584]: I0819 18:29:07.491780    6584 server.go:416] Version: v1.20.0
	Aug 19 18:29:07 old-k8s-version-079123 kubelet[6584]: I0819 18:29:07.492134    6584 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 18:29:07 old-k8s-version-079123 kubelet[6584]: I0819 18:29:07.495101    6584 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 18:29:07 old-k8s-version-079123 kubelet[6584]: I0819 18:29:07.496275    6584 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 19 18:29:07 old-k8s-version-079123 kubelet[6584]: W0819 18:29:07.496309    6584 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (226.259317ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-079123" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.64s)
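The kubelet log above shows the node agent in a restart loop (systemd reports the restart counter at 114) while the apiserver status is reported as Stopped. For reference, a rough manual-triage sketch against the node, assuming the old-k8s-version-079123 profile were still present (the Audit table below shows it is deleted later in this run) and using the same `minikube -p <profile> ssh` form that appears elsewhere in this report:

    # Check whether the kubelet unit is up or crash-looping
    minikube -p old-k8s-version-079123 ssh "sudo systemctl status kubelet --no-pager"
    # Pull the most recent kubelet journal entries for the panic/backtrace seen above
    minikube -p old-k8s-version-079123 ssh "sudo journalctl -u kubelet --no-pager -n 100"
    # Confirm what CRI-O is actually running (the 'container status' section above is empty)
    minikube -p old-k8s-version-079123 ssh "sudo crictl ps -a"

These commands are illustrative only; they are not part of the test harness output.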

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (430.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 18:32:27.881845464 +0000 UTC m=+6009.586013357
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-813424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.531µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-813424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
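The assertion above waits up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard and then describes the dashboard-metrics-scraper deployment. A rough manual equivalent (a sketch only; the kubeconfig context name and namespace are taken from the log, and the 9m timeout mirrors the test's wait):

    # Check whether the dashboard pod ever appeared
    kubectl --context default-k8s-diff-port-813424 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    # Wait for it to become Ready, as the harness does
    kubectl --context default-k8s-diff-port-813424 wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m
    # Inspect the scraper deployment the test tried (and failed) to describe
    kubectl --context default-k8s-diff-port-813424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard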
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-813424 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-813424 logs -n 25: (1.195935423s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-233045             | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079123        | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233045                  | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-813424       | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:16 UTC |
	|         | default-k8s-diff-port-813424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079123             | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-233045 image list                           | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-814719 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | disable-driver-mounts-814719                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306581            | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC | 19 Aug 24 18:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306581                 | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC | 19 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:32 UTC | 19 Aug 24 18:32 UTC |
	| start   | -p auto-321572 --memory=3072                           | auto-321572                  | jenkins | v1.33.1 | 19 Aug 24 18:32 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-233969                                   | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:32 UTC | 19 Aug 24 18:32 UTC |
	| start   | -p kindnet-321572                                      | kindnet-321572               | jenkins | v1.33.1 | 19 Aug 24 18:32 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:32:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:32:10.413146   71252 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:32:10.413311   71252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:32:10.413321   71252 out.go:358] Setting ErrFile to fd 2...
	I0819 18:32:10.413326   71252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:32:10.413519   71252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:32:10.414107   71252 out.go:352] Setting JSON to false
	I0819 18:32:10.415096   71252 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8075,"bootTime":1724084255,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:32:10.415152   71252 start.go:139] virtualization: kvm guest
	I0819 18:32:10.417243   71252 out.go:177] * [kindnet-321572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:32:10.418416   71252 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:32:10.418418   71252 notify.go:220] Checking for updates...
	I0819 18:32:10.421379   71252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:32:10.422809   71252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:32:10.424066   71252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:32:10.425272   71252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:32:10.426466   71252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:32:10.428190   71252 config.go:182] Loaded profile config "auto-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:32:10.428293   71252 config.go:182] Loaded profile config "default-k8s-diff-port-813424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:32:10.428370   71252 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:32:10.428462   71252 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:32:10.466560   71252 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:32:10.467948   71252 start.go:297] selected driver: kvm2
	I0819 18:32:10.467969   71252 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:32:10.467980   71252 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:32:10.468888   71252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:32:10.468988   71252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:32:10.485718   71252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:32:10.485769   71252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:32:10.486028   71252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:32:10.486110   71252 cni.go:84] Creating CNI manager for "kindnet"
	I0819 18:32:10.486126   71252 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 18:32:10.486188   71252 start.go:340] cluster config:
	{Name:kindnet-321572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:32:10.486304   71252 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:32:10.488138   71252 out.go:177] * Starting "kindnet-321572" primary control-plane node in "kindnet-321572" cluster
	I0819 18:32:10.936786   70919 main.go:141] libmachine: (auto-321572) DBG | domain auto-321572 has defined MAC address 52:54:00:c9:b9:8d in network mk-auto-321572
	I0819 18:32:10.937415   70919 main.go:141] libmachine: (auto-321572) DBG | unable to find current IP address of domain auto-321572 in network mk-auto-321572
	I0819 18:32:10.937440   70919 main.go:141] libmachine: (auto-321572) DBG | I0819 18:32:10.937342   70942 retry.go:31] will retry after 1.089634383s: waiting for machine to come up
	I0819 18:32:12.028441   70919 main.go:141] libmachine: (auto-321572) DBG | domain auto-321572 has defined MAC address 52:54:00:c9:b9:8d in network mk-auto-321572
	I0819 18:32:12.029114   70919 main.go:141] libmachine: (auto-321572) DBG | unable to find current IP address of domain auto-321572 in network mk-auto-321572
	I0819 18:32:12.029166   70919 main.go:141] libmachine: (auto-321572) DBG | I0819 18:32:12.029060   70942 retry.go:31] will retry after 1.362475014s: waiting for machine to come up
	I0819 18:32:13.392633   70919 main.go:141] libmachine: (auto-321572) DBG | domain auto-321572 has defined MAC address 52:54:00:c9:b9:8d in network mk-auto-321572
	I0819 18:32:13.393194   70919 main.go:141] libmachine: (auto-321572) DBG | unable to find current IP address of domain auto-321572 in network mk-auto-321572
	I0819 18:32:13.393221   70919 main.go:141] libmachine: (auto-321572) DBG | I0819 18:32:13.393146   70942 retry.go:31] will retry after 1.765945786s: waiting for machine to come up
	I0819 18:32:15.160904   70919 main.go:141] libmachine: (auto-321572) DBG | domain auto-321572 has defined MAC address 52:54:00:c9:b9:8d in network mk-auto-321572
	I0819 18:32:15.161462   70919 main.go:141] libmachine: (auto-321572) DBG | unable to find current IP address of domain auto-321572 in network mk-auto-321572
	I0819 18:32:15.161486   70919 main.go:141] libmachine: (auto-321572) DBG | I0819 18:32:15.161374   70942 retry.go:31] will retry after 2.514081621s: waiting for machine to come up
	I0819 18:32:10.489273   71252 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:32:10.489304   71252 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:32:10.489316   71252 cache.go:56] Caching tarball of preloaded images
	I0819 18:32:10.489411   71252 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:32:10.489424   71252 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:32:10.489528   71252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/config.json ...
	I0819 18:32:10.489553   71252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/config.json: {Name:mk494e482133271160228673505b74bb7658f24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:32:10.489713   71252 start.go:360] acquireMachinesLock for kindnet-321572: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:32:17.678033   70919 main.go:141] libmachine: (auto-321572) DBG | domain auto-321572 has defined MAC address 52:54:00:c9:b9:8d in network mk-auto-321572
	I0819 18:32:17.678451   70919 main.go:141] libmachine: (auto-321572) DBG | unable to find current IP address of domain auto-321572 in network mk-auto-321572
	I0819 18:32:17.678479   70919 main.go:141] libmachine: (auto-321572) DBG | I0819 18:32:17.678405   70942 retry.go:31] will retry after 3.566865382s: waiting for machine to come up
	I0819 18:32:21.246608   70919 main.go:141] libmachine: (auto-321572) DBG | domain auto-321572 has defined MAC address 52:54:00:c9:b9:8d in network mk-auto-321572
	I0819 18:32:21.247083   70919 main.go:141] libmachine: (auto-321572) DBG | unable to find current IP address of domain auto-321572 in network mk-auto-321572
	I0819 18:32:21.247112   70919 main.go:141] libmachine: (auto-321572) DBG | I0819 18:32:21.247039   70942 retry.go:31] will retry after 3.367417226s: waiting for machine to come up
	I0819 18:32:24.618574   70919 main.go:141] libmachine: (auto-321572) DBG | domain auto-321572 has defined MAC address 52:54:00:c9:b9:8d in network mk-auto-321572
	I0819 18:32:24.619086   70919 main.go:141] libmachine: (auto-321572) DBG | unable to find current IP address of domain auto-321572 in network mk-auto-321572
	I0819 18:32:24.619109   70919 main.go:141] libmachine: (auto-321572) DBG | I0819 18:32:24.618978   70942 retry.go:31] will retry after 3.573047144s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.536482279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092348536461421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04ca42fc-b1e5-4fb1-9b04-c13d9ab5e624 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.537166699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b54fed68-4ba4-4361-a8c7-592773e4af44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.537219793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b54fed68-4ba4-4361-a8c7-592773e4af44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.537436896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b54fed68-4ba4-4361-a8c7-592773e4af44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.572729793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d78be95-37c4-4519-ba9b-62eec6179ab4 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.572815666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d78be95-37c4-4519-ba9b-62eec6179ab4 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.573970104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d814547-6933-4486-a481-25de420dba16 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.574469633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092348574431584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d814547-6933-4486-a481-25de420dba16 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.575295160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d37bc25-f595-457c-9390-caa0dc00c896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.575368075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d37bc25-f595-457c-9390-caa0dc00c896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.575582003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d37bc25-f595-457c-9390-caa0dc00c896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.618722525Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc5e70fd-234c-4fb9-aca4-43a67138802b name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.618836724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc5e70fd-234c-4fb9-aca4-43a67138802b name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.620174226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a5e189b-b45c-461a-8bf1-012f5b8cd413 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.620556430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092348620532964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a5e189b-b45c-461a-8bf1-012f5b8cd413 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.621083573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4af5e30-bfe4-4b83-8a6c-a5b34050f816 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.621143544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4af5e30-bfe4-4b83-8a6c-a5b34050f816 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.621339276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4af5e30-bfe4-4b83-8a6c-a5b34050f816 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.655828512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7097a65-ffd7-4426-99c3-fcb6c4525246 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.655896266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7097a65-ffd7-4426-99c3-fcb6c4525246 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.656843565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15931f91-854f-42e6-b852-9699b58b83b2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.657219328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092348657190700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15931f91-854f-42e6-b852-9699b58b83b2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.657803023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8aac268b-bc30-49e0-aa45-4a0d9eec0252 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.657851513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8aac268b-bc30-49e0-aa45-4a0d9eec0252 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:28 default-k8s-diff-port-813424 crio[732]: time="2024-08-19 18:32:28.658034720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091140340485700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9079ae27322395aa6d03df6f194c5a32b3895e4b42da68f5b6297d2db376d4a,PodSandboxId:92b342207b58a06c33af84e980e7a11badb7d4df61421f3fce2124ae3da43ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724091120060473682,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eeb80fff-1a91-4f45-8a17-c66d1da6882f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355,PodSandboxId:256397ebb865f86b780b9dd52ff6e1d901cb949ec399c52157c681e5ec2ddbb3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091117191314679,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4jvnz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d81201db-1102-436b-ac29-dd201584de2d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0,PodSandboxId:2c062223259f3567ba7b0844f4e476e33bd97963755f6fca0557ccf2f940280d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724091109556723661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j4x48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886f5fe5-0
70e-419c-a9bb-5b95f7496717,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a,PodSandboxId:aac9a42aaca679578b76b4aab32cf54a21ace04dd7579eb1d3cb37f8c5a3d6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724091109532726871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658a37e1-39b6-4fa9-8f23-
71518ebda8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb,PodSandboxId:210d84764ce9ccaa24e3553d5d1d0ff84aee0fcf0adb810b494e7eb365e48fb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091105515152518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: dfe4aef9394218cdf84b96908c10b892,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117,PodSandboxId:c1dd8bd99022f227c04ed00e33789c7ed99a4519de3505fd4847ca07e2aab70c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091105515649930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd10fbfb40091d8bfaeebfbc7ba
fa5e5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2,PodSandboxId:45504ed40a59ec46b898039a0c4979ee6b434cb2ec820414e886e559b1d29ad0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091105496296864,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56ffb1c4e951022e35868f5704d1a
06a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784,PodSandboxId:068e579a79a5648f54ed1af6c7432fbc7c59bf46572587788e70a9e81f4609b1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091105529383732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-813424,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367a35f849068c9dd6ee7c2eb24f15c
9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8aac268b-bc30-49e0-aa45-4a0d9eec0252 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c836b0235de70       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   aac9a42aaca67       storage-provisioner
	b9079ae273223       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   92b342207b58a       busybox
	85dd74b0050d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   256397ebb865f       coredns-6f6b679f8f-4jvnz
	eb30ed4fd51a8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      20 minutes ago      Running             kube-proxy                1                   2c062223259f3       kube-proxy-j4x48
	cef2e9a618dd4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   aac9a42aaca67       storage-provisioner
	d5fff05f93c77       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      20 minutes ago      Running             kube-apiserver            1                   068e579a79a56       kube-apiserver-default-k8s-diff-port-813424
	8832533edf13e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   c1dd8bd99022f       etcd-default-k8s-diff-port-813424
	faf8db92753dd       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      20 minutes ago      Running             kube-controller-manager   1                   210d84764ce9c       kube-controller-manager-default-k8s-diff-port-813424
	93344a9847519       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Running             kube-scheduler            1                   45504ed40a59e       kube-scheduler-default-k8s-diff-port-813424
	
	
	==> coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33584 - 26231 "HINFO IN 7158233729066554603.5883956134227833022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012419666s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-813424
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-813424
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=default-k8s-diff-port-813424
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_03_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:03:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-813424
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:32:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:27:34 +0000   Mon, 19 Aug 2024 18:03:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:27:34 +0000   Mon, 19 Aug 2024 18:03:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:27:34 +0000   Mon, 19 Aug 2024 18:03:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:27:34 +0000   Mon, 19 Aug 2024 18:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.243
	  Hostname:    default-k8s-diff-port-813424
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08e182672dc747df8c1f0d4f4aaaa876
	  System UUID:                08e18267-2dc7-47df-8c1f-0d4f4aaaa876
	  Boot ID:                    765fbb80-de14-4300-a592-1edf16df4bf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-4jvnz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-813424                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-813424             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-813424    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-j4x48                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-813424             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-tp742                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-813424 event: Registered Node default-k8s-diff-port-813424 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-813424 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-813424 event: Registered Node default-k8s-diff-port-813424 in Controller
	
	
	==> dmesg <==
	[Aug19 18:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051247] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037844] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.853515] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.893637] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.531463] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.425621] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.058498] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057092] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.194710] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.149379] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.301090] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +4.015662] systemd-fstab-generator[815]: Ignoring "noauto" option for root device
	[  +2.027733] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +0.058613] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.531889] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.901349] systemd-fstab-generator[1570]: Ignoring "noauto" option for root device
	[  +3.759782] kauditd_printk_skb: 64 callbacks suppressed
	[Aug19 18:12] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] <==
	{"level":"info","ts":"2024-08-19T18:12:05.319885Z","caller":"traceutil/trace.go:171","msg":"trace[1237329407] linearizableReadLoop","detail":"{readStateIndex:669; appliedIndex:668; }","duration":"440.86588ms","start":"2024-08-19T18:12:04.877095Z","end":"2024-08-19T18:12:05.317961Z","steps":["trace[1237329407] 'read index received'  (duration: 36.279475ms)","trace[1237329407] 'applied index is now lower than readState.Index'  (duration: 404.585073ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:12:05.320334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.598268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424\" ","response":"range_response_count:1 size:6921"}
	{"level":"info","ts":"2024-08-19T18:12:05.320376Z","caller":"traceutil/trace.go:171","msg":"trace[256885946] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-813424; range_end:; response_count:1; response_revision:629; }","duration":"273.64385ms","start":"2024-08-19T18:12:05.046723Z","end":"2024-08-19T18:12:05.320367Z","steps":["trace[256885946] 'agreement among raft nodes before linearized reading'  (duration: 273.511679ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:21.453414Z","caller":"traceutil/trace.go:171","msg":"trace[563747473] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"133.902294ms","start":"2024-08-19T18:12:21.319495Z","end":"2024-08-19T18:12:21.453397Z","steps":["trace[563747473] 'process raft request'  (duration: 133.547495ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:29.451845Z","caller":"traceutil/trace.go:171","msg":"trace[1743090959] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"123.374319ms","start":"2024-08-19T18:12:29.328452Z","end":"2024-08-19T18:12:29.451826Z","steps":["trace[1743090959] 'process raft request'  (duration: 123.13376ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:49.836314Z","caller":"traceutil/trace.go:171","msg":"trace[110017646] transaction","detail":"{read_only:false; response_revision:670; number_of_response:1; }","duration":"104.688583ms","start":"2024-08-19T18:12:49.731591Z","end":"2024-08-19T18:12:49.836280Z","steps":["trace[110017646] 'process raft request'  (duration: 104.559864ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:12:50.039629Z","caller":"traceutil/trace.go:171","msg":"trace[655742769] linearizableReadLoop","detail":"{readStateIndex:721; appliedIndex:719; }","duration":"246.39449ms","start":"2024-08-19T18:12:49.793221Z","end":"2024-08-19T18:12:50.039615Z","steps":["trace[655742769] 'read index received'  (duration: 43.004084ms)","trace[655742769] 'applied index is now lower than readState.Index'  (duration: 203.389707ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:12:50.039927Z","caller":"traceutil/trace.go:171","msg":"trace[25954473] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"282.096082ms","start":"2024-08-19T18:12:49.757819Z","end":"2024-08-19T18:12:50.039915Z","steps":["trace[25954473] 'process raft request'  (duration: 218.528197ms)","trace[25954473] 'compare'  (duration: 63.131367ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:12:50.040160Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.87744ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T18:12:50.040210Z","caller":"traceutil/trace.go:171","msg":"trace[1042123150] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:671; }","duration":"246.986079ms","start":"2024-08-19T18:12:49.793217Z","end":"2024-08-19T18:12:50.040203Z","steps":["trace[1042123150] 'agreement among raft nodes before linearized reading'  (duration: 246.851289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:12:50.040409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.914955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-tp742\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-08-19T18:12:50.040645Z","caller":"traceutil/trace.go:171","msg":"trace[248723877] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-tp742; range_end:; response_count:1; response_revision:671; }","duration":"186.151603ms","start":"2024-08-19T18:12:49.854485Z","end":"2024-08-19T18:12:50.040636Z","steps":["trace[248723877] 'agreement among raft nodes before linearized reading'  (duration: 185.833346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.786428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.499442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.786998Z","caller":"traceutil/trace.go:171","msg":"trace[1203999748] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1080; }","duration":"180.128019ms","start":"2024-08-19T18:21:02.606840Z","end":"2024-08-19T18:21:02.786968Z","steps":["trace[1203999748] 'range keys from in-memory index tree'  (duration: 179.37357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.786428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.060274ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.787126Z","caller":"traceutil/trace.go:171","msg":"trace[1957312397] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1080; }","duration":"172.788802ms","start":"2024-08-19T18:21:02.614324Z","end":"2024-08-19T18:21:02.787113Z","steps":["trace[1957312397] 'range keys from in-memory index tree'  (duration: 172.049921ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:21:47.186196Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":872}
	{"level":"info","ts":"2024-08-19T18:21:47.195891Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":872,"took":"9.38176ms","hash":1799941142,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2609152,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-08-19T18:21:47.195944Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1799941142,"revision":872,"compact-revision":-1}
	{"level":"info","ts":"2024-08-19T18:26:47.197902Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1115}
	{"level":"info","ts":"2024-08-19T18:26:47.201616Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1115,"took":"3.359746ms","hash":3896328684,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-19T18:26:47.201706Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3896328684,"revision":1115,"compact-revision":872}
	{"level":"info","ts":"2024-08-19T18:31:47.206605Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1359}
	{"level":"info","ts":"2024-08-19T18:31:47.210946Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1359,"took":"3.866002ms","hash":3676346314,"current-db-size-bytes":2609152,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1544192,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-19T18:31:47.211011Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3676346314,"revision":1359,"compact-revision":1115}
	
	
	==> kernel <==
	 18:32:28 up 21 min,  0 users,  load average: 0.00, 0.04, 0.09
	Linux default-k8s-diff-port-813424 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] <==
	I0819 18:27:49.395551       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:27:49.395755       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:29:49.396544       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:29:49.396693       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 18:29:49.396897       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:29:49.397085       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:29:49.397836       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:29:49.399018       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:31:48.395727       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:31:48.395898       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 18:31:49.398141       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 18:31:49.398185       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:31:49.398409       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 18:31:49.398288       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 18:31:49.399630       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:31:49.399721       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] <==
	E0819 18:27:22.105046       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:27:22.594282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:27:34.827584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-813424"
	E0819 18:27:52.111653       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:27:52.602636       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:28:06.145853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="245.29µs"
	I0819 18:28:20.145337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.479µs"
	E0819 18:28:22.117800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:28:22.612574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:28:52.124233       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:28:52.621478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:29:22.134027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:29:22.629891       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:29:52.140235       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:29:52.637467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:30:22.151478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:30:22.644738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:30:52.158202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:30:52.653223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:31:22.164275       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:31:22.660545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:31:52.170620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:31:52.668175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:32:22.176548       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:32:22.676228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:11:49.782553       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:11:49.795398       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0819 18:11:49.795470       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:11:49.848847       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:11:49.848887       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:11:49.848915       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:11:49.854360       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:11:49.854812       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:11:49.854839       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:11:49.856704       1 config.go:197] "Starting service config controller"
	I0819 18:11:49.856762       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:11:49.856797       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:11:49.856802       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:11:49.857270       1 config.go:326] "Starting node config controller"
	I0819 18:11:49.857295       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:11:49.957205       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:11:49.957269       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:11:49.957513       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] <==
	I0819 18:11:46.769799       1 serving.go:386] Generated self-signed cert in-memory
	W0819 18:11:48.326857       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 18:11:48.326900       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 18:11:48.326911       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 18:11:48.326919       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 18:11:48.400892       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0819 18:11:48.402726       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:11:48.406393       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0819 18:11:48.406527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0819 18:11:48.406580       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 18:11:48.406646       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0819 18:11:48.507462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:31:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:14.426943     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092274426460325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:24 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:24.128794     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:31:24 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:24.428903     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092284428536491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:24 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:24.429179     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092284428536491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:34 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:34.432086     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092294431364152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:34 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:34.432173     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092294431364152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:37 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:37.128996     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:31:44 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:44.143723     942 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:31:44 default-k8s-diff-port-813424 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:31:44 default-k8s-diff-port-813424 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:31:44 default-k8s-diff-port-813424 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:31:44 default-k8s-diff-port-813424 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:31:44 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:44.434154     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092304433888574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:44 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:44.434183     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092304433888574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:48 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:48.129107     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:31:54 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:54.436514     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092314435619532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:54 default-k8s-diff-port-813424 kubelet[942]: E0819 18:31:54.436727     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092314435619532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:02 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:02.128392     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:32:04 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:04.438878     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092324438420634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:04 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:04.439086     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092324438420634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:14.440891     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092334440587913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:14 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:14.440933     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092334440587913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:17 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:17.131077     942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tp742" podUID="aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb"
	Aug 19 18:32:24 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:24.442292     942 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092344441913037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:24 default-k8s-diff-port-813424 kubelet[942]: E0819 18:32:24.442626     942 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092344441913037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] <==
	I0819 18:12:20.427542       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:12:20.437481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:12:20.437625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:12:37.837007       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:12:37.837186       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813424_1e85614c-1b80-49ff-b874-f378ba5f5dcb!
	I0819 18:12:37.838653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1aa00ed4-3110-4122-8d29-2b0fbcbbcd49", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-813424_1e85614c-1b80-49ff-b874-f378ba5f5dcb became leader
	I0819 18:12:37.938118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813424_1e85614c-1b80-49ff-b874-f378ba5f5dcb!
	
	
	==> storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] <==
	I0819 18:11:49.635529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0819 18:12:19.639408       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tp742
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 describe pod metrics-server-6867b74b74-tp742
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-813424 describe pod metrics-server-6867b74b74-tp742: exit status 1 (70.642381ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tp742" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-813424 describe pod metrics-server-6867b74b74-tp742: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (430.20s)
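The post-mortem above (helpers_test.go:261 and :277) finds the leftover metrics-server pod by listing every pod whose phase is not Running and then describing it; the pod reported at helpers_test.go:272 is already gone by the time describe runs, hence the NotFound. For readers reproducing that query outside the harness, the following is a minimal client-go sketch of the same field-selector listing. It is not the harness's implementation, and the kubeconfig location is an assumption.

// Minimal sketch (assumed setup, not the harness's code): list pods across all
// namespaces whose phase is not Running, mirroring the field selector used above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}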

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-306581 -n embed-certs-306581
E0819 18:35:08.629497   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:08.635847   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:08.647210   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:08.668695   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:08.710105   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-19 18:35:08.786360525 +0000 UTC m=+6170.490528405
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
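start_stop_delete_test.go:274 above records a 9m0s wait for any pod carrying the k8s-app=kubernetes-dashboard label; the context deadline expires before one appears, and the rate-limiter warning at helpers_test.go:329 is that same deadline surfacing through the client. As an illustration only (not the harness's implementation), a label-selector wait of that shape can be written with client-go roughly as follows; the kubeconfig path and the 5-second poll interval are assumptions.

// Minimal sketch (assumed setup): poll until a pod labelled
// k8s-app=kubernetes-dashboard is Running, or give up after the timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDashboard(cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // transient API errors: keep polling until the deadline
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDashboard(cs, 9*time.Minute); err != nil {
		fmt.Println("kubernetes-dashboard pod did not become Running:", err)
	}
}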
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581
E0819 18:35:08.791802   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:08.953525   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-306581 logs -n 25
E0819 18:35:09.274952   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:09.916877   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-306581 logs -n 25: (1.244414112s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo cat                            | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo cat                            | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo cat                            | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo docker                         | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo cat                            | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo cat                            | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo cat                            | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo cat                            | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo                                | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo find                           | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p calico-321572 sudo crio                           | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p calico-321572                                     | calico-321572  | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC | 19 Aug 24 18:34 UTC |
	| start   | -p flannel-321572                                    | flannel-321572 | jenkins | v1.33.1 | 19 Aug 24 18:34 UTC |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:34:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:34:41.182757   76968 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:34:41.183047   76968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:34:41.183062   76968 out.go:358] Setting ErrFile to fd 2...
	I0819 18:34:41.183068   76968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:34:41.183281   76968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:34:41.183890   76968 out.go:352] Setting JSON to false
	I0819 18:34:41.185567   76968 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8226,"bootTime":1724084255,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:34:41.185634   76968 start.go:139] virtualization: kvm guest
	I0819 18:34:41.188162   76968 out.go:177] * [flannel-321572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:34:41.189563   76968 notify.go:220] Checking for updates...
	I0819 18:34:41.189581   76968 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:34:41.190879   76968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:34:41.192309   76968 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:34:41.193810   76968 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:34:41.194925   76968 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:34:41.196397   76968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:34:41.198056   76968 config.go:182] Loaded profile config "custom-flannel-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:34:41.198155   76968 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:34:41.198240   76968 config.go:182] Loaded profile config "enable-default-cni-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:34:41.198320   76968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:34:41.236471   76968 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:34:41.237567   76968 start.go:297] selected driver: kvm2
	I0819 18:34:41.237590   76968 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:34:41.237602   76968 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:34:41.238560   76968 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:34:41.238656   76968 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:34:41.254420   76968 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:34:41.254465   76968 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:34:41.254674   76968 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:34:41.254705   76968 cni.go:84] Creating CNI manager for "flannel"
	I0819 18:34:41.254711   76968 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0819 18:34:41.254760   76968 start.go:340] cluster config:
	{Name:flannel-321572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:34:41.254848   76968 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:34:41.256827   76968 out.go:177] * Starting "flannel-321572" primary control-plane node in "flannel-321572" cluster
	I0819 18:34:37.857979   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:37.858522   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | unable to find current IP address of domain enable-default-cni-321572 in network mk-enable-default-cni-321572
	I0819 18:34:37.858554   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | I0819 18:34:37.858477   75465 retry.go:31] will retry after 2.965647383s: waiting for machine to come up
	I0819 18:34:40.860573   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:40.861115   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | unable to find current IP address of domain enable-default-cni-321572 in network mk-enable-default-cni-321572
	I0819 18:34:40.861139   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | I0819 18:34:40.861068   75465 retry.go:31] will retry after 3.90027233s: waiting for machine to come up
	I0819 18:34:42.314780   73911 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:34:42.314851   73911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:34:42.314947   73911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:34:42.315071   73911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:34:42.315182   73911 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:34:42.315287   73911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:34:42.316973   73911 out.go:235]   - Generating certificates and keys ...
	I0819 18:34:42.317064   73911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:34:42.317132   73911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:34:42.317194   73911 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:34:42.317259   73911 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:34:42.317340   73911 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:34:42.317398   73911 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:34:42.317451   73911 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:34:42.317607   73911 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-321572 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I0819 18:34:42.317664   73911 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:34:42.317766   73911 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-321572 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I0819 18:34:42.317823   73911 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:34:42.317886   73911 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:34:42.317929   73911 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:34:42.317977   73911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:34:42.318025   73911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:34:42.318091   73911 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:34:42.318171   73911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:34:42.318253   73911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:34:42.318353   73911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:34:42.318489   73911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:34:42.318593   73911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:34:42.319869   73911 out.go:235]   - Booting up control plane ...
	I0819 18:34:42.319977   73911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:34:42.320068   73911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:34:42.320155   73911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:34:42.320269   73911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:34:42.320346   73911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:34:42.320384   73911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:34:42.320547   73911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:34:42.320681   73911 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:34:42.320776   73911 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.475162ms
	I0819 18:34:42.320859   73911 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:34:42.320940   73911 kubeadm.go:310] [api-check] The API server is healthy after 5.502001964s
	I0819 18:34:42.321062   73911 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:34:42.321233   73911 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:34:42.321318   73911 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:34:42.321516   73911 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-321572 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:34:42.321590   73911 kubeadm.go:310] [bootstrap-token] Using token: vrz0sq.wwt4iittshtreloc
	I0819 18:34:42.323133   73911 out.go:235]   - Configuring RBAC rules ...
	I0819 18:34:42.323241   73911 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:34:42.323341   73911 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:34:42.323468   73911 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:34:42.323615   73911 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:34:42.323792   73911 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:34:42.323908   73911 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:34:42.324065   73911 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:34:42.324117   73911 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:34:42.324166   73911 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:34:42.324172   73911 kubeadm.go:310] 
	I0819 18:34:42.324220   73911 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:34:42.324226   73911 kubeadm.go:310] 
	I0819 18:34:42.324318   73911 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:34:42.324324   73911 kubeadm.go:310] 
	I0819 18:34:42.324345   73911 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:34:42.324399   73911 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:34:42.324478   73911 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:34:42.324509   73911 kubeadm.go:310] 
	I0819 18:34:42.324596   73911 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:34:42.324608   73911 kubeadm.go:310] 
	I0819 18:34:42.324655   73911 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:34:42.324663   73911 kubeadm.go:310] 
	I0819 18:34:42.324709   73911 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:34:42.324802   73911 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:34:42.324868   73911 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:34:42.324875   73911 kubeadm.go:310] 
	I0819 18:34:42.324943   73911 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:34:42.325032   73911 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:34:42.325038   73911 kubeadm.go:310] 
	I0819 18:34:42.325125   73911 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vrz0sq.wwt4iittshtreloc \
	I0819 18:34:42.325248   73911 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:34:42.325287   73911 kubeadm.go:310] 	--control-plane 
	I0819 18:34:42.325296   73911 kubeadm.go:310] 
	I0819 18:34:42.325412   73911 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:34:42.325425   73911 kubeadm.go:310] 
	I0819 18:34:42.325496   73911 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vrz0sq.wwt4iittshtreloc \
	I0819 18:34:42.325598   73911 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:34:42.325615   73911 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0819 18:34:42.327184   73911 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0819 18:34:41.258201   76968 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:34:41.258240   76968 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:34:41.258257   76968 cache.go:56] Caching tarball of preloaded images
	I0819 18:34:41.258348   76968 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:34:41.258373   76968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:34:41.258460   76968 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/config.json ...
	I0819 18:34:41.258479   76968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/config.json: {Name:mk52349b573bcd51c55cab83e9c813a6e835473f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:41.258646   76968 start.go:360] acquireMachinesLock for flannel-321572: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:34:44.765980   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:44.766457   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | unable to find current IP address of domain enable-default-cni-321572 in network mk-enable-default-cni-321572
	I0819 18:34:44.766480   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | I0819 18:34:44.766420   75465 retry.go:31] will retry after 5.509869399s: waiting for machine to come up
	I0819 18:34:42.328437   73911 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 18:34:42.328482   73911 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0819 18:34:42.335521   73911 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0819 18:34:42.335554   73911 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0819 18:34:42.358981   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 18:34:42.774936   73911 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:34:42.775008   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:42.775043   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-321572 minikube.k8s.io/updated_at=2024_08_19T18_34_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=custom-flannel-321572 minikube.k8s.io/primary=true
	I0819 18:34:42.834784   73911 ops.go:34] apiserver oom_adj: -16
	I0819 18:34:42.922630   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:43.422893   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:43.923200   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:44.423534   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:44.923662   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:45.423030   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:45.922951   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:46.422869   73911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:34:46.509103   73911 kubeadm.go:1113] duration metric: took 3.734149303s to wait for elevateKubeSystemPrivileges
	I0819 18:34:46.509136   73911 kubeadm.go:394] duration metric: took 14.917949088s to StartCluster
	I0819 18:34:46.509154   73911 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:46.509233   73911 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:34:46.510983   73911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:46.511204   73911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 18:34:46.511227   73911 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:34:46.511286   73911 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-321572"
	I0819 18:34:46.511313   73911 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-321572"
	I0819 18:34:46.511204   73911 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:34:46.511331   73911 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-321572"
	I0819 18:34:46.511344   73911 host.go:66] Checking if "custom-flannel-321572" exists ...
	I0819 18:34:46.511360   73911 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-321572"
	I0819 18:34:46.511417   73911 config.go:182] Loaded profile config "custom-flannel-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:34:46.511747   73911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:34:46.511771   73911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:34:46.511858   73911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:34:46.511904   73911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:34:46.512558   73911 out.go:177] * Verifying Kubernetes components...
	I0819 18:34:46.514560   73911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:34:46.528298   73911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0819 18:34:46.528354   73911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38645
	I0819 18:34:46.528765   73911 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:34:46.528818   73911 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:34:46.529354   73911 main.go:141] libmachine: Using API Version  1
	I0819 18:34:46.529379   73911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:34:46.529462   73911 main.go:141] libmachine: Using API Version  1
	I0819 18:34:46.529483   73911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:34:46.529757   73911 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:34:46.529841   73911 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:34:46.529958   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetState
	I0819 18:34:46.530447   73911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:34:46.530480   73911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:34:46.534165   73911 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-321572"
	I0819 18:34:46.534208   73911 host.go:66] Checking if "custom-flannel-321572" exists ...
	I0819 18:34:46.534603   73911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:34:46.534636   73911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:34:46.546889   73911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0819 18:34:46.547418   73911 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:34:46.547933   73911 main.go:141] libmachine: Using API Version  1
	I0819 18:34:46.547952   73911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:34:46.548316   73911 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:34:46.548539   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetState
	I0819 18:34:46.550583   73911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I0819 18:34:46.550726   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .DriverName
	I0819 18:34:46.552380   73911 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:34:46.552601   73911 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:34:46.552872   73911 main.go:141] libmachine: Using API Version  1
	I0819 18:34:46.552896   73911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:34:46.553221   73911 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:34:46.553729   73911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:34:46.553768   73911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:34:46.554031   73911 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:34:46.554051   73911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:34:46.554068   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHHostname
	I0819 18:34:46.557501   73911 main.go:141] libmachine: (custom-flannel-321572) DBG | domain custom-flannel-321572 has defined MAC address 52:54:00:1f:50:2a in network mk-custom-flannel-321572
	I0819 18:34:46.558271   73911 main.go:141] libmachine: (custom-flannel-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:50:2a", ip: ""} in network mk-custom-flannel-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:34:11 +0000 UTC Type:0 Mac:52:54:00:1f:50:2a Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-321572 Clientid:01:52:54:00:1f:50:2a}
	I0819 18:34:46.558293   73911 main.go:141] libmachine: (custom-flannel-321572) DBG | domain custom-flannel-321572 has defined IP address 192.168.39.9 and MAC address 52:54:00:1f:50:2a in network mk-custom-flannel-321572
	I0819 18:34:46.558514   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHPort
	I0819 18:34:46.558700   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHKeyPath
	I0819 18:34:46.558903   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHUsername
	I0819 18:34:46.559069   73911 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/custom-flannel-321572/id_rsa Username:docker}
	I0819 18:34:46.572109   73911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0819 18:34:46.572793   73911 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:34:46.573344   73911 main.go:141] libmachine: Using API Version  1
	I0819 18:34:46.573362   73911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:34:46.573664   73911 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:34:46.573832   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetState
	I0819 18:34:46.575698   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .DriverName
	I0819 18:34:46.576247   73911 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:34:46.576259   73911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:34:46.576273   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHHostname
	I0819 18:34:46.580038   73911 main.go:141] libmachine: (custom-flannel-321572) DBG | domain custom-flannel-321572 has defined MAC address 52:54:00:1f:50:2a in network mk-custom-flannel-321572
	I0819 18:34:46.580480   73911 main.go:141] libmachine: (custom-flannel-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:50:2a", ip: ""} in network mk-custom-flannel-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:34:11 +0000 UTC Type:0 Mac:52:54:00:1f:50:2a Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:custom-flannel-321572 Clientid:01:52:54:00:1f:50:2a}
	I0819 18:34:46.580516   73911 main.go:141] libmachine: (custom-flannel-321572) DBG | domain custom-flannel-321572 has defined IP address 192.168.39.9 and MAC address 52:54:00:1f:50:2a in network mk-custom-flannel-321572
	I0819 18:34:46.580741   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHPort
	I0819 18:34:46.580967   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHKeyPath
	I0819 18:34:46.581187   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .GetSSHUsername
	I0819 18:34:46.581422   73911 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/custom-flannel-321572/id_rsa Username:docker}
	I0819 18:34:46.796688   73911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:34:46.796774   73911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 18:34:46.918746   73911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:34:46.954094   73911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:34:47.333141   73911 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0819 18:34:47.334744   73911 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-321572" to be "Ready" ...
	I0819 18:34:47.642280   73911 main.go:141] libmachine: Making call to close driver server
	I0819 18:34:47.642312   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .Close
	I0819 18:34:47.642281   73911 main.go:141] libmachine: Making call to close driver server
	I0819 18:34:47.642381   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .Close
	I0819 18:34:47.642643   73911 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:34:47.642648   73911 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:34:47.642660   73911 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:34:47.642668   73911 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:34:47.642673   73911 main.go:141] libmachine: Making call to close driver server
	I0819 18:34:47.642678   73911 main.go:141] libmachine: Making call to close driver server
	I0819 18:34:47.642684   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .Close
	I0819 18:34:47.642688   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .Close
	I0819 18:34:47.642895   73911 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:34:47.642912   73911 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:34:47.643082   73911 main.go:141] libmachine: (custom-flannel-321572) DBG | Closing plugin on server side
	I0819 18:34:47.643105   73911 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:34:47.643125   73911 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:34:47.658060   73911 main.go:141] libmachine: Making call to close driver server
	I0819 18:34:47.658079   73911 main.go:141] libmachine: (custom-flannel-321572) Calling .Close
	I0819 18:34:47.658390   73911 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:34:47.658412   73911 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:34:47.660249   73911 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 18:34:50.278193   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.278661   75222 main.go:141] libmachine: (enable-default-cni-321572) Found IP for machine: 192.168.50.233
	I0819 18:34:50.278688   75222 main.go:141] libmachine: (enable-default-cni-321572) Reserving static IP address...
	I0819 18:34:50.278702   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has current primary IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.279143   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-321572", mac: "52:54:00:a8:4e:eb", ip: "192.168.50.233"} in network mk-enable-default-cni-321572
	I0819 18:34:50.358846   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | Getting to WaitForSSH function...
	I0819 18:34:50.358880   75222 main.go:141] libmachine: (enable-default-cni-321572) Reserved static IP address: 192.168.50.233
	I0819 18:34:50.358909   75222 main.go:141] libmachine: (enable-default-cni-321572) Waiting for SSH to be available...
	I0819 18:34:50.361840   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.362520   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:50.362549   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.362669   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | Using SSH client type: external
	I0819 18:34:50.362698   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/enable-default-cni-321572/id_rsa (-rw-------)
	I0819 18:34:50.362736   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/enable-default-cni-321572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:34:50.362755   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | About to run SSH command:
	I0819 18:34:50.362771   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | exit 0
	I0819 18:34:50.489168   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | SSH cmd err, output: <nil>: 
	I0819 18:34:50.489444   75222 main.go:141] libmachine: (enable-default-cni-321572) KVM machine creation complete!
	I0819 18:34:50.489772   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetConfigRaw
	I0819 18:34:50.490305   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .DriverName
	I0819 18:34:50.490551   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .DriverName
	I0819 18:34:50.490702   75222 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:34:50.490717   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetState
	I0819 18:34:50.492027   75222 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:34:50.492043   75222 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:34:50.492052   75222 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:34:50.492061   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:50.494701   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.495123   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:50.495144   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.495347   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:50.495517   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.495697   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.495894   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:50.496072   75222 main.go:141] libmachine: Using SSH client type: native
	I0819 18:34:50.496295   75222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 18:34:50.496308   75222 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:34:50.599967   75222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:34:50.599989   75222 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:34:50.599997   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:50.602729   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.603129   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:50.603163   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.603474   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:50.603679   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.603834   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.603972   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:50.604107   75222 main.go:141] libmachine: Using SSH client type: native
	I0819 18:34:50.604283   75222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 18:34:50.604297   75222 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:34:50.709449   75222 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:34:50.709552   75222 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:34:50.709571   75222 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:34:50.709581   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetMachineName
	I0819 18:34:50.709857   75222 buildroot.go:166] provisioning hostname "enable-default-cni-321572"
	I0819 18:34:50.709880   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetMachineName
	I0819 18:34:50.710060   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:50.712649   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.713070   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:50.713097   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.713281   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:50.713516   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.713691   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.713844   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:50.714070   75222 main.go:141] libmachine: Using SSH client type: native
	I0819 18:34:50.714273   75222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 18:34:50.714291   75222 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-321572 && echo "enable-default-cni-321572" | sudo tee /etc/hostname
	I0819 18:34:50.838616   75222 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-321572
	
	I0819 18:34:50.838648   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:50.841475   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.841838   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:50.841878   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.842052   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:50.842239   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.842418   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:50.842622   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:50.842900   75222 main.go:141] libmachine: Using SSH client type: native
	I0819 18:34:50.843086   75222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 18:34:50.843104   75222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-321572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-321572/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-321572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:34:50.958008   75222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:34:50.958046   75222 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:34:50.958091   75222 buildroot.go:174] setting up certificates
	I0819 18:34:50.958114   75222 provision.go:84] configureAuth start
	I0819 18:34:50.958143   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetMachineName
	I0819 18:34:50.958454   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetIP
	I0819 18:34:50.961790   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.962187   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:50.962221   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.962371   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:50.965050   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.965437   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:50.965502   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:50.965662   75222 provision.go:143] copyHostCerts
	I0819 18:34:50.965738   75222 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:34:50.965752   75222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:34:50.965817   75222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:34:50.965937   75222 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:34:50.965949   75222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:34:50.965981   75222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:34:50.966058   75222 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:34:50.966069   75222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:34:50.966096   75222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:34:50.966157   75222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-321572 san=[127.0.0.1 192.168.50.233 enable-default-cni-321572 localhost minikube]
	I0819 18:34:51.145849   75222 provision.go:177] copyRemoteCerts
	I0819 18:34:51.145902   75222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:34:51.145926   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:51.149097   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.149483   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.149522   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.149679   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:51.149915   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.150102   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:51.150250   75222 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/enable-default-cni-321572/id_rsa Username:docker}
	I0819 18:34:51.235280   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:34:51.259045   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0819 18:34:51.283387   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:34:51.305543   75222 provision.go:87] duration metric: took 347.413304ms to configureAuth
	I0819 18:34:51.305570   75222 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:34:51.305784   75222 config.go:182] Loaded profile config "enable-default-cni-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:34:51.305874   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:51.308392   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.308793   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.308832   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.308942   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:51.309106   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.309266   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.309371   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:51.309562   75222 main.go:141] libmachine: Using SSH client type: native
	I0819 18:34:51.309764   75222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 18:34:51.309796   75222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:34:47.661837   73911 addons.go:510] duration metric: took 1.150611237s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 18:34:47.838348   73911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-321572" context rescaled to 1 replicas
	I0819 18:34:49.338577   73911 node_ready.go:53] node "custom-flannel-321572" has status "Ready":"False"
	I0819 18:34:51.341231   73911 node_ready.go:53] node "custom-flannel-321572" has status "Ready":"False"
	I0819 18:34:51.821491   76968 start.go:364] duration metric: took 10.562794102s to acquireMachinesLock for "flannel-321572"
	I0819 18:34:51.821570   76968 start.go:93] Provisioning new machine with config: &{Name:flannel-321572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:34:51.821680   76968 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:34:51.580126   75222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:34:51.580151   75222 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:34:51.580163   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetURL
	I0819 18:34:51.581604   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | Using libvirt version 6000000
	I0819 18:34:51.583812   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.584189   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.584219   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.584383   75222 main.go:141] libmachine: Docker is up and running!
	I0819 18:34:51.584399   75222 main.go:141] libmachine: Reticulating splines...
	I0819 18:34:51.584407   75222 client.go:171] duration metric: took 26.704167542s to LocalClient.Create
	I0819 18:34:51.584451   75222 start.go:167] duration metric: took 26.704227496s to libmachine.API.Create "enable-default-cni-321572"
	I0819 18:34:51.584461   75222 start.go:293] postStartSetup for "enable-default-cni-321572" (driver="kvm2")
	I0819 18:34:51.584476   75222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:34:51.584497   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .DriverName
	I0819 18:34:51.584793   75222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:34:51.584824   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:51.586762   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.587042   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.587070   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.587153   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:51.587364   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.587480   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:51.587647   75222 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/enable-default-cni-321572/id_rsa Username:docker}
	I0819 18:34:51.670678   75222 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:34:51.674931   75222 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:34:51.674956   75222 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:34:51.675026   75222 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:34:51.675115   75222 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:34:51.675242   75222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:34:51.684099   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:34:51.706431   75222 start.go:296] duration metric: took 121.937769ms for postStartSetup
	I0819 18:34:51.706482   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetConfigRaw
	I0819 18:34:51.707079   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetIP
	I0819 18:34:51.709724   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.710040   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.710063   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.710360   75222 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/config.json ...
	I0819 18:34:51.710540   75222 start.go:128] duration metric: took 26.852464126s to createHost
	I0819 18:34:51.710561   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:51.712896   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.713216   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.713240   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.713427   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:51.713581   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.713718   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.713869   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:51.713976   75222 main.go:141] libmachine: Using SSH client type: native
	I0819 18:34:51.714130   75222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.233 22 <nil> <nil>}
	I0819 18:34:51.714141   75222 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:34:51.821339   75222 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724092491.777936104
	
	I0819 18:34:51.821362   75222 fix.go:216] guest clock: 1724092491.777936104
	I0819 18:34:51.821369   75222 fix.go:229] Guest: 2024-08-19 18:34:51.777936104 +0000 UTC Remote: 2024-08-19 18:34:51.710550978 +0000 UTC m=+45.420587591 (delta=67.385126ms)
	I0819 18:34:51.821388   75222 fix.go:200] guest clock delta is within tolerance: 67.385126ms
	I0819 18:34:51.821392   75222 start.go:83] releasing machines lock for "enable-default-cni-321572", held for 26.96352655s
	I0819 18:34:51.821414   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .DriverName
	I0819 18:34:51.821703   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetIP
	I0819 18:34:51.824467   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.824872   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.824903   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.825066   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .DriverName
	I0819 18:34:51.825610   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .DriverName
	I0819 18:34:51.825786   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .DriverName
	I0819 18:34:51.825857   75222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:34:51.825909   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:51.826023   75222 ssh_runner.go:195] Run: cat /version.json
	I0819 18:34:51.826048   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHHostname
	I0819 18:34:51.828710   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.828971   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.829095   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.829124   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.829318   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:51.829441   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:51.829485   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:51.829514   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.829713   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:51.829771   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHPort
	I0819 18:34:51.829858   75222 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/enable-default-cni-321572/id_rsa Username:docker}
	I0819 18:34:51.829931   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHKeyPath
	I0819 18:34:51.830049   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetSSHUsername
	I0819 18:34:51.830155   75222 sshutil.go:53] new ssh client: &{IP:192.168.50.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/enable-default-cni-321572/id_rsa Username:docker}
	I0819 18:34:51.913839   75222 ssh_runner.go:195] Run: systemctl --version
	I0819 18:34:51.952592   75222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:34:52.111772   75222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:34:52.117793   75222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:34:52.117874   75222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:34:52.133383   75222 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:34:52.133408   75222 start.go:495] detecting cgroup driver to use...
	I0819 18:34:52.133463   75222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:34:52.149969   75222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:34:52.164021   75222 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:34:52.164090   75222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:34:52.177845   75222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:34:52.192570   75222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:34:52.304742   75222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:34:52.455851   75222 docker.go:233] disabling docker service ...
	I0819 18:34:52.455927   75222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:34:52.471906   75222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:34:52.485114   75222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:34:52.633007   75222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:34:52.769550   75222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:34:52.788422   75222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:34:52.810558   75222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:34:52.810642   75222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:34:52.821129   75222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:34:52.821182   75222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:34:52.833403   75222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:34:52.845358   75222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:34:52.857735   75222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:34:52.869968   75222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:34:52.881323   75222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:34:52.903207   75222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:34:52.915445   75222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:34:52.928971   75222 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:34:52.929037   75222 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:34:52.944535   75222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:34:52.956096   75222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:34:53.080425   75222 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:34:53.263755   75222 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:34:53.263838   75222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:34:53.269807   75222 start.go:563] Will wait 60s for crictl version
	I0819 18:34:53.269867   75222 ssh_runner.go:195] Run: which crictl
	I0819 18:34:53.274578   75222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:34:53.318824   75222 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:34:53.318939   75222 ssh_runner.go:195] Run: crio --version
	I0819 18:34:53.349789   75222 ssh_runner.go:195] Run: crio --version
	I0819 18:34:53.383138   75222 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:34:51.823500   76968 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 18:34:51.823674   76968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:34:51.823733   76968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:34:51.840569   76968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0819 18:34:51.841111   76968 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:34:51.841696   76968 main.go:141] libmachine: Using API Version  1
	I0819 18:34:51.841716   76968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:34:51.842010   76968 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:34:51.842170   76968 main.go:141] libmachine: (flannel-321572) Calling .GetMachineName
	I0819 18:34:51.842314   76968 main.go:141] libmachine: (flannel-321572) Calling .DriverName
	I0819 18:34:51.842456   76968 start.go:159] libmachine.API.Create for "flannel-321572" (driver="kvm2")
	I0819 18:34:51.842484   76968 client.go:168] LocalClient.Create starting
	I0819 18:34:51.842515   76968 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 18:34:51.842553   76968 main.go:141] libmachine: Decoding PEM data...
	I0819 18:34:51.842569   76968 main.go:141] libmachine: Parsing certificate...
	I0819 18:34:51.842638   76968 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 18:34:51.842657   76968 main.go:141] libmachine: Decoding PEM data...
	I0819 18:34:51.842669   76968 main.go:141] libmachine: Parsing certificate...
	I0819 18:34:51.842690   76968 main.go:141] libmachine: Running pre-create checks...
	I0819 18:34:51.842698   76968 main.go:141] libmachine: (flannel-321572) Calling .PreCreateCheck
	I0819 18:34:51.843019   76968 main.go:141] libmachine: (flannel-321572) Calling .GetConfigRaw
	I0819 18:34:51.843377   76968 main.go:141] libmachine: Creating machine...
	I0819 18:34:51.843389   76968 main.go:141] libmachine: (flannel-321572) Calling .Create
	I0819 18:34:51.843509   76968 main.go:141] libmachine: (flannel-321572) Creating KVM machine...
	I0819 18:34:51.844656   76968 main.go:141] libmachine: (flannel-321572) DBG | found existing default KVM network
	I0819 18:34:51.846082   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:51.845834   77080 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8e:ef:63} reservation:<nil>}
	I0819 18:34:51.847225   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:51.847135   77080 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:5d:12} reservation:<nil>}
	I0819 18:34:51.848420   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:51.848338   77080 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000308a90}
	I0819 18:34:51.848471   76968 main.go:141] libmachine: (flannel-321572) DBG | created network xml: 
	I0819 18:34:51.848492   76968 main.go:141] libmachine: (flannel-321572) DBG | <network>
	I0819 18:34:51.848506   76968 main.go:141] libmachine: (flannel-321572) DBG |   <name>mk-flannel-321572</name>
	I0819 18:34:51.848514   76968 main.go:141] libmachine: (flannel-321572) DBG |   <dns enable='no'/>
	I0819 18:34:51.848536   76968 main.go:141] libmachine: (flannel-321572) DBG |   
	I0819 18:34:51.848551   76968 main.go:141] libmachine: (flannel-321572) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0819 18:34:51.848560   76968 main.go:141] libmachine: (flannel-321572) DBG |     <dhcp>
	I0819 18:34:51.848571   76968 main.go:141] libmachine: (flannel-321572) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0819 18:34:51.848580   76968 main.go:141] libmachine: (flannel-321572) DBG |     </dhcp>
	I0819 18:34:51.848587   76968 main.go:141] libmachine: (flannel-321572) DBG |   </ip>
	I0819 18:34:51.848596   76968 main.go:141] libmachine: (flannel-321572) DBG |   
	I0819 18:34:51.848606   76968 main.go:141] libmachine: (flannel-321572) DBG | </network>
	I0819 18:34:51.848616   76968 main.go:141] libmachine: (flannel-321572) DBG | 
	I0819 18:34:51.854646   76968 main.go:141] libmachine: (flannel-321572) DBG | trying to create private KVM network mk-flannel-321572 192.168.61.0/24...
	I0819 18:34:51.928357   76968 main.go:141] libmachine: (flannel-321572) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572 ...
	I0819 18:34:51.928390   76968 main.go:141] libmachine: (flannel-321572) DBG | private KVM network mk-flannel-321572 192.168.61.0/24 created
	I0819 18:34:51.928407   76968 main.go:141] libmachine: (flannel-321572) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:34:51.928444   76968 main.go:141] libmachine: (flannel-321572) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:34:51.928461   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:51.928312   77080 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:34:52.184578   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:52.184428   77080 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572/id_rsa...
	I0819 18:34:52.332493   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:52.332357   77080 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572/flannel-321572.rawdisk...
	I0819 18:34:52.332523   76968 main.go:141] libmachine: (flannel-321572) DBG | Writing magic tar header
	I0819 18:34:52.332539   76968 main.go:141] libmachine: (flannel-321572) DBG | Writing SSH key tar header
	I0819 18:34:52.332552   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:52.332511   77080 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572 ...
	I0819 18:34:52.332654   76968 main.go:141] libmachine: (flannel-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572
	I0819 18:34:52.332683   76968 main.go:141] libmachine: (flannel-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572 (perms=drwx------)
	I0819 18:34:52.332703   76968 main.go:141] libmachine: (flannel-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 18:34:52.332719   76968 main.go:141] libmachine: (flannel-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:34:52.332733   76968 main.go:141] libmachine: (flannel-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:34:52.332778   76968 main.go:141] libmachine: (flannel-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 18:34:52.332796   76968 main.go:141] libmachine: (flannel-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 18:34:52.332810   76968 main.go:141] libmachine: (flannel-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 18:34:52.332834   76968 main.go:141] libmachine: (flannel-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:34:52.332845   76968 main.go:141] libmachine: (flannel-321572) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:34:52.332857   76968 main.go:141] libmachine: (flannel-321572) DBG | Checking permissions on dir: /home
	I0819 18:34:52.332866   76968 main.go:141] libmachine: (flannel-321572) DBG | Skipping /home - not owner
	I0819 18:34:52.332878   76968 main.go:141] libmachine: (flannel-321572) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:34:52.332890   76968 main.go:141] libmachine: (flannel-321572) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:34:52.332903   76968 main.go:141] libmachine: (flannel-321572) Creating domain...
	I0819 18:34:52.334173   76968 main.go:141] libmachine: (flannel-321572) define libvirt domain using xml: 
	I0819 18:34:52.334200   76968 main.go:141] libmachine: (flannel-321572) <domain type='kvm'>
	I0819 18:34:52.334208   76968 main.go:141] libmachine: (flannel-321572)   <name>flannel-321572</name>
	I0819 18:34:52.334214   76968 main.go:141] libmachine: (flannel-321572)   <memory unit='MiB'>3072</memory>
	I0819 18:34:52.334222   76968 main.go:141] libmachine: (flannel-321572)   <vcpu>2</vcpu>
	I0819 18:34:52.334229   76968 main.go:141] libmachine: (flannel-321572)   <features>
	I0819 18:34:52.334239   76968 main.go:141] libmachine: (flannel-321572)     <acpi/>
	I0819 18:34:52.334255   76968 main.go:141] libmachine: (flannel-321572)     <apic/>
	I0819 18:34:52.334264   76968 main.go:141] libmachine: (flannel-321572)     <pae/>
	I0819 18:34:52.334270   76968 main.go:141] libmachine: (flannel-321572)     
	I0819 18:34:52.334276   76968 main.go:141] libmachine: (flannel-321572)   </features>
	I0819 18:34:52.334283   76968 main.go:141] libmachine: (flannel-321572)   <cpu mode='host-passthrough'>
	I0819 18:34:52.334288   76968 main.go:141] libmachine: (flannel-321572)   
	I0819 18:34:52.334295   76968 main.go:141] libmachine: (flannel-321572)   </cpu>
	I0819 18:34:52.334326   76968 main.go:141] libmachine: (flannel-321572)   <os>
	I0819 18:34:52.334365   76968 main.go:141] libmachine: (flannel-321572)     <type>hvm</type>
	I0819 18:34:52.334379   76968 main.go:141] libmachine: (flannel-321572)     <boot dev='cdrom'/>
	I0819 18:34:52.334388   76968 main.go:141] libmachine: (flannel-321572)     <boot dev='hd'/>
	I0819 18:34:52.334399   76968 main.go:141] libmachine: (flannel-321572)     <bootmenu enable='no'/>
	I0819 18:34:52.334408   76968 main.go:141] libmachine: (flannel-321572)   </os>
	I0819 18:34:52.334415   76968 main.go:141] libmachine: (flannel-321572)   <devices>
	I0819 18:34:52.334426   76968 main.go:141] libmachine: (flannel-321572)     <disk type='file' device='cdrom'>
	I0819 18:34:52.334442   76968 main.go:141] libmachine: (flannel-321572)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572/boot2docker.iso'/>
	I0819 18:34:52.334461   76968 main.go:141] libmachine: (flannel-321572)       <target dev='hdc' bus='scsi'/>
	I0819 18:34:52.334478   76968 main.go:141] libmachine: (flannel-321572)       <readonly/>
	I0819 18:34:52.334491   76968 main.go:141] libmachine: (flannel-321572)     </disk>
	I0819 18:34:52.334499   76968 main.go:141] libmachine: (flannel-321572)     <disk type='file' device='disk'>
	I0819 18:34:52.334511   76968 main.go:141] libmachine: (flannel-321572)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:34:52.334527   76968 main.go:141] libmachine: (flannel-321572)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572/flannel-321572.rawdisk'/>
	I0819 18:34:52.334536   76968 main.go:141] libmachine: (flannel-321572)       <target dev='hda' bus='virtio'/>
	I0819 18:34:52.334543   76968 main.go:141] libmachine: (flannel-321572)     </disk>
	I0819 18:34:52.334555   76968 main.go:141] libmachine: (flannel-321572)     <interface type='network'>
	I0819 18:34:52.334566   76968 main.go:141] libmachine: (flannel-321572)       <source network='mk-flannel-321572'/>
	I0819 18:34:52.334574   76968 main.go:141] libmachine: (flannel-321572)       <model type='virtio'/>
	I0819 18:34:52.334584   76968 main.go:141] libmachine: (flannel-321572)     </interface>
	I0819 18:34:52.334612   76968 main.go:141] libmachine: (flannel-321572)     <interface type='network'>
	I0819 18:34:52.334633   76968 main.go:141] libmachine: (flannel-321572)       <source network='default'/>
	I0819 18:34:52.334644   76968 main.go:141] libmachine: (flannel-321572)       <model type='virtio'/>
	I0819 18:34:52.334655   76968 main.go:141] libmachine: (flannel-321572)     </interface>
	I0819 18:34:52.334665   76968 main.go:141] libmachine: (flannel-321572)     <serial type='pty'>
	I0819 18:34:52.334676   76968 main.go:141] libmachine: (flannel-321572)       <target port='0'/>
	I0819 18:34:52.334686   76968 main.go:141] libmachine: (flannel-321572)     </serial>
	I0819 18:34:52.334698   76968 main.go:141] libmachine: (flannel-321572)     <console type='pty'>
	I0819 18:34:52.334710   76968 main.go:141] libmachine: (flannel-321572)       <target type='serial' port='0'/>
	I0819 18:34:52.334719   76968 main.go:141] libmachine: (flannel-321572)     </console>
	I0819 18:34:52.334733   76968 main.go:141] libmachine: (flannel-321572)     <rng model='virtio'>
	I0819 18:34:52.334747   76968 main.go:141] libmachine: (flannel-321572)       <backend model='random'>/dev/random</backend>
	I0819 18:34:52.334777   76968 main.go:141] libmachine: (flannel-321572)     </rng>
	I0819 18:34:52.334793   76968 main.go:141] libmachine: (flannel-321572)     
	I0819 18:34:52.334802   76968 main.go:141] libmachine: (flannel-321572)     
	I0819 18:34:52.334815   76968 main.go:141] libmachine: (flannel-321572)   </devices>
	I0819 18:34:52.334826   76968 main.go:141] libmachine: (flannel-321572) </domain>
	I0819 18:34:52.334834   76968 main.go:141] libmachine: (flannel-321572) 
	I0819 18:34:52.339428   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:de:9f:92 in network default
	I0819 18:34:52.339997   76968 main.go:141] libmachine: (flannel-321572) Ensuring networks are active...
	I0819 18:34:52.340022   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:52.340771   76968 main.go:141] libmachine: (flannel-321572) Ensuring network default is active
	I0819 18:34:52.341250   76968 main.go:141] libmachine: (flannel-321572) Ensuring network mk-flannel-321572 is active
	I0819 18:34:52.341867   76968 main.go:141] libmachine: (flannel-321572) Getting domain xml...
	I0819 18:34:52.343003   76968 main.go:141] libmachine: (flannel-321572) Creating domain...
	I0819 18:34:53.874815   76968 main.go:141] libmachine: (flannel-321572) Waiting to get IP...
	I0819 18:34:53.875713   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:53.876558   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:53.876585   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:53.876524   77080 retry.go:31] will retry after 255.346711ms: waiting for machine to come up
	I0819 18:34:54.133952   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:54.134705   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:54.134735   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:54.134651   77080 retry.go:31] will retry after 292.952462ms: waiting for machine to come up
	I0819 18:34:54.429216   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:54.429938   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:54.429968   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:54.429891   77080 retry.go:31] will retry after 375.505693ms: waiting for machine to come up
	I0819 18:34:54.807624   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:54.808382   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:54.808413   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:54.808295   77080 retry.go:31] will retry after 563.24019ms: waiting for machine to come up
	I0819 18:34:55.373271   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:55.373876   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:55.373908   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:55.373848   77080 retry.go:31] will retry after 578.543704ms: waiting for machine to come up
	I0819 18:34:55.954643   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:55.955150   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:55.955182   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:55.955126   77080 retry.go:31] will retry after 667.588618ms: waiting for machine to come up
	I0819 18:34:53.384249   75222 main.go:141] libmachine: (enable-default-cni-321572) Calling .GetIP
	I0819 18:34:53.387571   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:53.388010   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:4e:eb", ip: ""} in network mk-enable-default-cni-321572: {Iface:virbr2 ExpiryTime:2024-08-19 19:34:40 +0000 UTC Type:0 Mac:52:54:00:a8:4e:eb Iaid: IPaddr:192.168.50.233 Prefix:24 Hostname:enable-default-cni-321572 Clientid:01:52:54:00:a8:4e:eb}
	I0819 18:34:53.388041   75222 main.go:141] libmachine: (enable-default-cni-321572) DBG | domain enable-default-cni-321572 has defined IP address 192.168.50.233 and MAC address 52:54:00:a8:4e:eb in network mk-enable-default-cni-321572
	I0819 18:34:53.388263   75222 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0819 18:34:53.392680   75222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:34:53.409079   75222 kubeadm.go:883] updating cluster {Name:enable-default-cni-321572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:34:53.409220   75222 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:34:53.409284   75222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:34:53.447681   75222 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:34:53.447760   75222 ssh_runner.go:195] Run: which lz4
	I0819 18:34:53.453320   75222 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:34:53.458920   75222 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:34:53.458961   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:34:54.831139   75222 crio.go:462] duration metric: took 1.377858854s to copy over tarball
	I0819 18:34:54.831203   75222 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:34:53.840363   73911 node_ready.go:53] node "custom-flannel-321572" has status "Ready":"False"
	I0819 18:34:55.339927   73911 node_ready.go:49] node "custom-flannel-321572" has status "Ready":"True"
	I0819 18:34:55.339959   73911 node_ready.go:38] duration metric: took 8.005184805s for node "custom-flannel-321572" to be "Ready" ...
	I0819 18:34:55.339972   73911 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:34:55.352947   73911 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-lzdmc" in "kube-system" namespace to be "Ready" ...
	I0819 18:34:57.285726   75222 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.45448164s)
	I0819 18:34:57.285764   75222 crio.go:469] duration metric: took 2.454601429s to extract the tarball
	I0819 18:34:57.285773   75222 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:34:57.325529   75222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:34:57.375142   75222 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:34:57.375174   75222 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:34:57.375185   75222 kubeadm.go:934] updating node { 192.168.50.233 8443 v1.31.0 crio true true} ...
	I0819 18:34:57.375341   75222 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-321572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0819 18:34:57.375434   75222 ssh_runner.go:195] Run: crio config
	I0819 18:34:57.425315   75222 cni.go:84] Creating CNI manager for "bridge"
	I0819 18:34:57.425342   75222 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:34:57.425369   75222 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.233 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-321572 NodeName:enable-default-cni-321572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:34:57.425543   75222 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-321572"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:34:57.425613   75222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:34:57.436369   75222 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:34:57.436504   75222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:34:57.446165   75222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0819 18:34:57.463637   75222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:34:57.480637   75222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
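
The kubeadm.yaml generated above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch, assuming gopkg.in/yaml.v3 and a local copy of the file, that splits the stream and prints each document's apiVersion and kind; this is illustrative only and not part of minikube:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Example path; in the log above the file is written to
        // /var/tmp/minikube/kubeadm.yaml.new inside the VM.
        data, err := os.ReadFile("kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        dec := yaml.NewDecoder(bytes.NewReader(data))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if err == io.EOF {
                    break
                }
                log.Fatal(err)
            }
            if len(doc) == 0 {
                continue // tolerate empty documents
            }
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }
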
	I0819 18:34:57.498844   75222 ssh_runner.go:195] Run: grep 192.168.50.233	control-plane.minikube.internal$ /etc/hosts
	I0819 18:34:57.503262   75222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:34:57.516237   75222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:34:57.671185   75222 ssh_runner.go:195] Run: sudo systemctl start kubelet
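
Both /etc/hosts rewrites above (host.minikube.internal and control-plane.minikube.internal) follow the same pattern: drop any existing line ending in the name, append a fresh "IP<tab>name" entry, and copy the result back over /etc/hosts. A minimal Go sketch of that edit on a hosts-format string; the helper is hypothetical, not minikube code:

    package main

    import (
        "fmt"
        "strings"
    )

    // setHostsEntry removes any existing lines ending in "\t<name>" and appends
    // a fresh "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline above.
    func setHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale entry, like grep -v $'\t<name>$'
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
        fmt.Print(setHostsEntry(hosts, "192.168.50.233", "control-plane.minikube.internal"))
    }
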
	I0819 18:34:57.688855   75222 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572 for IP: 192.168.50.233
	I0819 18:34:57.688881   75222 certs.go:194] generating shared ca certs ...
	I0819 18:34:57.688899   75222 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:57.689062   75222 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:34:57.689137   75222 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:34:57.689151   75222 certs.go:256] generating profile certs ...
	I0819 18:34:57.689608   75222 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.key
	I0819 18:34:57.689635   75222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt with IP's: []
	I0819 18:34:57.767907   75222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt ...
	I0819 18:34:57.767944   75222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: {Name:mk6fe0ab658ce91f67bae5ecc64da4cb5dd4c307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:57.768127   75222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.key ...
	I0819 18:34:57.768142   75222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.key: {Name:mk47bc5bcc9ba38353bb863c63090679e5bdfece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:57.768247   75222 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.key.c47f4330
	I0819 18:34:57.768269   75222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.crt.c47f4330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.233]
	I0819 18:34:57.833375   75222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.crt.c47f4330 ...
	I0819 18:34:57.833411   75222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.crt.c47f4330: {Name:mk6982fb9289e18a8d3c55b18c54aa8725407ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:57.833592   75222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.key.c47f4330 ...
	I0819 18:34:57.833609   75222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.key.c47f4330: {Name:mkf5ea029a9721f5416e10078ad44b773a2ae3e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:57.833692   75222 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.crt.c47f4330 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.crt
	I0819 18:34:57.833783   75222 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.key.c47f4330 -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.key
	I0819 18:34:57.833842   75222 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.key
	I0819 18:34:57.833858   75222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.crt with IP's: []
	I0819 18:34:58.213506   75222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.crt ...
	I0819 18:34:58.213537   75222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.crt: {Name:mkb4bcd1b0ffd9ccafb35f0c1c92fc9c677764a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:34:58.213723   75222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.key ...
	I0819 18:34:58.213736   75222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.key: {Name:mkcf4a781f5341568f4f43e5fe4e2073d9405d4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
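
The crypto.go lines above issue per-profile certificates signed by the shared minikubeCA, with the apiserver cert carrying IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP. A compact sketch of issuing such a certificate with Go's standard crypto/x509 against an in-memory CA; key sizes, lifetimes and the lack of file handling are simplifying assumptions, and this is not minikube's crypto.go:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // In-memory CA, standing in for the reused minikubeCA key pair above.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Leaf certificate with the apiserver-style IP SANs seen in the log above.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.233"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
        fmt.Printf("issued %d-byte PEM cert with %d IP SANs\n", len(pemBytes), len(leafTmpl.IPAddresses))
    }
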
	I0819 18:34:58.213897   75222 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:34:58.213934   75222 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:34:58.213945   75222 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:34:58.213966   75222 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:34:58.213988   75222 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:34:58.214009   75222 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:34:58.214045   75222 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:34:58.214672   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:34:58.283693   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:34:58.310670   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:34:58.334439   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:34:58.364994   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0819 18:34:58.404584   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:34:58.433985   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:34:58.458205   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:34:58.482761   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:34:58.506866   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:34:58.531155   75222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:34:58.553978   75222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:34:58.570113   75222 ssh_runner.go:195] Run: openssl version
	I0819 18:34:58.576628   75222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:34:58.586439   75222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:34:58.590525   75222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:34:58.590583   75222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:34:58.595893   75222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:34:58.607773   75222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:34:58.618409   75222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:34:58.622408   75222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:34:58.622471   75222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:34:58.627722   75222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 18:34:58.639786   75222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:34:58.650612   75222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:34:58.654955   75222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:34:58.655005   75222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:34:58.660533   75222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:34:58.671239   75222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:34:58.675296   75222 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:34:58.675369   75222 kubeadm.go:392] StartCluster: {Name:enable-default-cni-321572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:34:58.675458   75222 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:34:58.675537   75222 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:34:58.711326   75222 cri.go:89] found id: ""
	I0819 18:34:58.711420   75222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:34:58.721141   75222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:34:58.731100   75222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:34:58.740983   75222 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:34:58.741007   75222 kubeadm.go:157] found existing configuration files:
	
	I0819 18:34:58.741044   75222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:34:58.749682   75222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:34:58.749737   75222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:34:58.759118   75222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:34:58.768588   75222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:34:58.768652   75222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:34:58.777298   75222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:34:58.785850   75222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:34:58.785907   75222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:34:58.796208   75222 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:34:58.805007   75222 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:34:58.805068   75222 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
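
The config check above walks the four kubeconfigs under /etc/kubernetes and removes any that do not reference https://control-plane.minikube.internal:8443, so kubeadm can regenerate them. A minimal sketch of that cleanup, assuming a hypothetical cleanStaleKubeconfigs helper and simplified error handling:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs mirrors the loop above: any existing kubeconfig in
    // dir that does not point at the expected control-plane endpoint is removed.
    func cleanStaleKubeconfigs(dir, endpoint string) {
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := dir + "/" + name
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                os.Remove(path) // ignore "not exist" errors, like rm -f
            }
        }
    }

    func main() {
        dir, err := os.MkdirTemp("", "kubecfg")
        if err != nil {
            panic(err)
        }
        // A stale kubeconfig pointing at a different endpoint.
        if err := os.WriteFile(dir+"/admin.conf", []byte("server: https://10.0.0.1:6443\n"), 0o600); err != nil {
            panic(err)
        }
        cleanStaleKubeconfigs(dir, "https://control-plane.minikube.internal:8443")
    }
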
	I0819 18:34:58.814008   75222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:34:58.861696   75222 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:34:58.861840   75222 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:34:58.958347   75222 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:34:58.958515   75222 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:34:58.958669   75222 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:34:58.965707   75222 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:34:59.088268   75222 out.go:235]   - Generating certificates and keys ...
	I0819 18:34:59.088427   75222 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:34:59.088537   75222 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:34:59.088659   75222 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:34:59.237484   75222 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:34:59.342637   75222 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:34:59.538026   75222 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:34:59.667418   75222 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:34:59.667607   75222 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-321572 localhost] and IPs [192.168.50.233 127.0.0.1 ::1]
	I0819 18:34:59.732447   75222 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:34:59.732675   75222 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-321572 localhost] and IPs [192.168.50.233 127.0.0.1 ::1]
	I0819 18:34:59.908589   75222 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:35:00.064123   75222 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:35:00.139916   75222 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:35:00.140186   75222 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:35:00.284876   75222 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:35:00.388209   75222 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:35:00.543171   75222 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:35:00.869755   75222 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:35:00.987239   75222 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:35:00.987949   75222 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:35:00.990456   75222 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:34:56.624496   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:56.625040   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:56.625072   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:56.624984   77080 retry.go:31] will retry after 723.669396ms: waiting for machine to come up
	I0819 18:34:57.350216   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:57.350748   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:57.350775   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:57.350692   77080 retry.go:31] will retry after 1.483810321s: waiting for machine to come up
	I0819 18:34:58.836841   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:58.837412   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:58.837438   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:58.837352   77080 retry.go:31] will retry after 1.131675656s: waiting for machine to come up
	I0819 18:34:59.970543   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:34:59.971135   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:34:59.971159   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:34:59.971080   77080 retry.go:31] will retry after 2.280079448s: waiting for machine to come up
	I0819 18:35:00.992378   75222 out.go:235]   - Booting up control plane ...
	I0819 18:35:00.992495   75222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:35:00.992626   75222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:35:00.994607   75222 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:35:01.013857   75222 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:35:01.021494   75222 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:35:01.021586   75222 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:35:01.166334   75222 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:35:01.166531   75222 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:34:57.587109   73911 pod_ready.go:103] pod "coredns-6f6b679f8f-lzdmc" in "kube-system" namespace has status "Ready":"False"
	I0819 18:34:59.860903   73911 pod_ready.go:103] pod "coredns-6f6b679f8f-lzdmc" in "kube-system" namespace has status "Ready":"False"
	I0819 18:35:02.252720   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:35:02.253312   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:35:02.253348   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:35:02.253247   77080 retry.go:31] will retry after 2.369313804s: waiting for machine to come up
	I0819 18:35:04.623807   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:35:04.624227   76968 main.go:141] libmachine: (flannel-321572) DBG | unable to find current IP address of domain flannel-321572 in network mk-flannel-321572
	I0819 18:35:04.624251   76968 main.go:141] libmachine: (flannel-321572) DBG | I0819 18:35:04.624204   77080 retry.go:31] will retry after 3.391052127s: waiting for machine to come up
	I0819 18:35:01.668490   75222 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.084759ms
	I0819 18:35:01.668637   75222 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:35:02.360098   73911 pod_ready.go:103] pod "coredns-6f6b679f8f-lzdmc" in "kube-system" namespace has status "Ready":"False"
	I0819 18:35:04.361251   73911 pod_ready.go:103] pod "coredns-6f6b679f8f-lzdmc" in "kube-system" namespace has status "Ready":"False"
	I0819 18:35:06.671340   75222 kubeadm.go:310] [api-check] The API server is healthy after 5.002377796s
	I0819 18:35:06.688717   75222 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:35:06.704424   75222 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:35:06.727445   75222 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:35:06.727723   75222 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-321572 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:35:06.740461   75222 kubeadm.go:310] [bootstrap-token] Using token: 2kix1j.0ley20k8iunonlao
	I0819 18:35:06.742006   75222 out.go:235]   - Configuring RBAC rules ...
	I0819 18:35:06.742151   75222 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:35:06.747332   75222 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:35:06.753988   75222 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:35:06.759992   75222 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:35:06.763135   75222 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:35:06.766423   75222 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:35:07.084774   75222 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:35:07.527716   75222 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:35:08.079270   75222 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:35:08.079303   75222 kubeadm.go:310] 
	I0819 18:35:08.079423   75222 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:35:08.079446   75222 kubeadm.go:310] 
	I0819 18:35:08.079587   75222 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:35:08.079596   75222 kubeadm.go:310] 
	I0819 18:35:08.079631   75222 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:35:08.079720   75222 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:35:08.079807   75222 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:35:08.079823   75222 kubeadm.go:310] 
	I0819 18:35:08.079897   75222 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:35:08.079911   75222 kubeadm.go:310] 
	I0819 18:35:08.079984   75222 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:35:08.079997   75222 kubeadm.go:310] 
	I0819 18:35:08.080073   75222 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:35:08.080182   75222 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:35:08.080293   75222 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:35:08.080318   75222 kubeadm.go:310] 
	I0819 18:35:08.080446   75222 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:35:08.080550   75222 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:35:08.080563   75222 kubeadm.go:310] 
	I0819 18:35:08.080691   75222 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2kix1j.0ley20k8iunonlao \
	I0819 18:35:08.080849   75222 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:35:08.080884   75222 kubeadm.go:310] 	--control-plane 
	I0819 18:35:08.080892   75222 kubeadm.go:310] 
	I0819 18:35:08.080987   75222 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:35:08.080997   75222 kubeadm.go:310] 
	I0819 18:35:08.081100   75222 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2kix1j.0ley20k8iunonlao \
	I0819 18:35:08.081235   75222 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:35:08.081784   75222 kubeadm.go:310] W0819 18:34:58.814725     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:35:08.082131   75222 kubeadm.go:310] W0819 18:34:58.815578     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:35:08.082254   75222 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:35:08.082278   75222 cni.go:84] Creating CNI manager for "bridge"
	I0819 18:35:08.084198   75222 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
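The kubeadm excerpt above ends with the standard post-init steps, and the last line shows minikube moving on to configure the bridge CNI. A minimal sketch of running those checks by hand on the enable-default-cni-321572 VM; the /etc/cni/net.d path is an assumption (minikube writes the bridge config automatically and the exact filename is not shown in this log):

  # copy the admin kubeconfig, exactly as printed by kubeadm above
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  # the node only reports Ready once a CNI config exists; directory below is assumed, not taken from the log
  ls /etc/cni/net.d/
  kubectl get nodes -o wide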
	
	
	==> CRI-O <==
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.451941970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba20e566-1fcc-43d7-adb5-9646773d5c89 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.453084991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=601e2ca2-33af-4145-957f-f055a77740ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.453481419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092509453459787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=601e2ca2-33af-4145-957f-f055a77740ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.453985019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a1ddd00-e351-4fa8-82a0-25d0d2a72790 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.454046342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a1ddd00-e351-4fa8-82a0-25d0d2a72790 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.454247241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a1ddd00-e351-4fa8-82a0-25d0d2a72790 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.494150994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f3c6ad3-7d5a-4920-af0c-b15f5404e146 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.494241470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f3c6ad3-7d5a-4920-af0c-b15f5404e146 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.495445621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a456dba2-a427-4215-a1db-b306ab1f8786 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.495890504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092509495865463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a456dba2-a427-4215-a1db-b306ab1f8786 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.496410808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0a6fd6c-4438-4b4f-9289-13a4e82f7964 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.496480449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0a6fd6c-4438-4b4f-9289-13a4e82f7964 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.496727108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0a6fd6c-4438-4b4f-9289-13a4e82f7964 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.520303971Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9ea49436-3c15-4a1b-8bef-97e61cd520fc name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.520749913Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:26d63f30-45fd-48f4-973e-6a72cf931b9d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091964821210966,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T18:26:04.510029589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:923f5bbdccbf220daf9a4cd88b6aff2db9b4cf759b9a7b852c59cd16ba8f423f,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-j8qbw,Uid:6c7ec046-01e2-4903-9937-c79aabc81bb2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091964667482671,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-j8qbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ec046-01e2-4903-9937-c79aabc81bb
2,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:04.361325271Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-274qq,Uid:af408da7-683b-4730-b836-a5ae446e84d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091963033498270,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:02.723511264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-j764j,Uid:726e772d-dd20-4427
-b8b2-40422b5be1ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091963031058924,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726e772d-dd20-4427-b8b2-40422b5be1ef,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:02.695433875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&PodSandboxMetadata{Name:kube-proxy-df5kf,Uid:0f004f8f-d49f-468e-acac-a7d691c9cdba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091962857367234,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:02.547507824Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-306581,Uid:aabf286bc9c738fac48e9947f3fc0100,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091952130021886,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.181:8443,kubernetes.io/config.hash: aabf286bc9c738fac48e9947f3fc0100,kubernetes.io/config.seen: 2024-08-19T18:25:51.674524755Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e63256173f447a4709e23d5a577b
3383b611e43247b0d254d3e56a92169815a6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-306581,Uid:ef10e3f64821ad739cb86e41c4230360,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091952128024771,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ef10e3f64821ad739cb86e41c4230360,kubernetes.io/config.seen: 2024-08-19T18:25:51.674526946Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-306581,Uid:584eb78fa73054250a13e68afac29f82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091952125852315,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.181:2379,kubernetes.io/config.hash: 584eb78fa73054250a13e68afac29f82,kubernetes.io/config.seen: 2024-08-19T18:25:51.674520273Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-306581,Uid:d61941e45b337edba2e6d09e2044800d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091952123544004,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: d61941e45b337edba2e6d09e2044800d,kubernetes.io/config.seen: 2024-08-19T18:25:51.674525794Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-306581,Uid:aabf286bc9c738fac48e9947f3fc0100,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724091662900193981,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.181:8443,kubernetes.io/config.hash: aabf286bc9c738fac48e9947f3fc0100,kubernetes.io/config.seen: 2024-08-19T18:21:02.354619975Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=9ea49436-3c15-4a1b-8bef-97e61cd520fc name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.521382545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f82d9bd0-7819-4ff7-94e6-8a41961de5cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.521451337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f82d9bd0-7819-4ff7-94e6-8a41961de5cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.521630729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f82d9bd0-7819-4ff7-94e6-8a41961de5cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.530879360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6c48bdf-71a7-4518-9cff-d9b805013556 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.530943657Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6c48bdf-71a7-4518-9cff-d9b805013556 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.532153093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3c927a0-10c4-471f-9349-1b501f4c7703 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.532636643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092509532615907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3c927a0-10c4-471f-9349-1b501f4c7703 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.533257246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74542174-12c0-4758-baad-04fe0981ec56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.533328175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74542174-12c0-4758-baad-04fe0981ec56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:35:09 embed-certs-306581 crio[728]: time="2024-08-19 18:35:09.533523279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74542174-12c0-4758-baad-04fe0981ec56 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a3faf70767cdd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7a554e7e3cbbc       storage-provisioner
	4022599b0f0e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f18d0c432227a       coredns-6f6b679f8f-274qq
	bc90a845e481d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3985f838704b1       coredns-6f6b679f8f-j764j
	29723539f4118       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   58454aa433bdd       kube-proxy-df5kf
	bc556da057424       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   887216af0d85d       etcd-embed-certs-306581
	c5d45d5ec1be7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   fca70f617a3a1       kube-apiserver-embed-certs-306581
	dd452eae270cd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   e63256173f447       kube-scheduler-embed-certs-306581
	94116d3e73bcb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   0322d593e3c29       kube-controller-manager-embed-certs-306581
	2bcd811e39e2b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   b156d94d8add2       kube-apiserver-embed-certs-306581
	
	
	==> coredns [4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-306581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-306581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=embed-certs-306581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:25:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-306581
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:35:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:31:13 +0000   Mon, 19 Aug 2024 18:25:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:31:13 +0000   Mon, 19 Aug 2024 18:25:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:31:13 +0000   Mon, 19 Aug 2024 18:25:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:31:13 +0000   Mon, 19 Aug 2024 18:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.181
	  Hostname:    embed-certs-306581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c22361cf51d4549af6a9956c518d00d
	  System UUID:                1c22361c-f51d-4549-af6a-9956c518d00d
	  Boot ID:                    c25cae55-8312-4340-b9c6-45c51f945434
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-274qq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-6f6b679f8f-j764j                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-306581                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-embed-certs-306581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-controller-manager-embed-certs-306581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-proxy-df5kf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-306581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 metrics-server-6867b74b74-j8qbw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m12s  kubelet          Node embed-certs-306581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s  kubelet          Node embed-certs-306581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s  kubelet          Node embed-certs-306581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-306581 event: Registered Node embed-certs-306581 in Controller
	
	
	==> dmesg <==
	[  +0.051072] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038756] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.785033] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.870204] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.509295] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.556848] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.060858] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073459] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.167777] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.137783] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.279785] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +3.957657] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[Aug19 18:21] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +0.062184] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.714338] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.367881] kauditd_printk_skb: 85 callbacks suppressed
	[Aug19 18:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.551820] systemd-fstab-generator[2559]: Ignoring "noauto" option for root device
	[  +4.663905] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.395018] systemd-fstab-generator[2880]: Ignoring "noauto" option for root device
	[Aug19 18:26] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.088043] systemd-fstab-generator[3025]: Ignoring "noauto" option for root device
	[Aug19 18:27] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03] <==
	{"level":"info","ts":"2024-08-19T18:33:22.263952Z","caller":"traceutil/trace.go:171","msg":"trace[1739782408] linearizableReadLoop","detail":"{readStateIndex:906; appliedIndex:905; }","duration":"274.659937ms","start":"2024-08-19T18:33:21.989271Z","end":"2024-08-19T18:33:22.263931Z","steps":["trace[1739782408] 'read index received'  (duration: 33.35µs)","trace[1739782408] 'applied index is now lower than readState.Index'  (duration: 274.625156ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:33:22.264059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.394713ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:22.264125Z","caller":"traceutil/trace.go:171","msg":"trace[176011399] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:803; }","duration":"330.469744ms","start":"2024-08-19T18:33:21.933644Z","end":"2024-08-19T18:33:22.264114Z","steps":["trace[176011399] 'range keys from in-memory index tree'  (duration: 330.383063ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:22.264212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.928615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2024-08-19T18:33:22.264269Z","caller":"traceutil/trace.go:171","msg":"trace[1828139161] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:803; }","duration":"274.989017ms","start":"2024-08-19T18:33:21.989266Z","end":"2024-08-19T18:33:22.264255Z","steps":["trace[1828139161] 'agreement among raft nodes before linearized reading'  (duration: 274.755106ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:33:22.625296Z","caller":"traceutil/trace.go:171","msg":"trace[1561824951] linearizableReadLoop","detail":"{readStateIndex:907; appliedIndex:906; }","duration":"146.827886ms","start":"2024-08-19T18:33:22.478454Z","end":"2024-08-19T18:33:22.625282Z","steps":["trace[1561824951] 'read index received'  (duration: 146.68323ms)","trace[1561824951] 'applied index is now lower than readState.Index'  (duration: 144.046µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:33:22.625439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.969364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:22.625479Z","caller":"traceutil/trace.go:171","msg":"trace[800172507] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:804; }","duration":"147.022572ms","start":"2024-08-19T18:33:22.478450Z","end":"2024-08-19T18:33:22.625472Z","steps":["trace[800172507] 'agreement among raft nodes before linearized reading'  (duration: 146.953846ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:33:22.625555Z","caller":"traceutil/trace.go:171","msg":"trace[609087219] transaction","detail":"{read_only:false; response_revision:804; number_of_response:1; }","duration":"356.61694ms","start":"2024-08-19T18:33:22.268923Z","end":"2024-08-19T18:33:22.625540Z","steps":["trace[609087219] 'process raft request'  (duration: 356.226619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:22.626996Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:33:22.268909Z","time spent":"357.94134ms","remote":"127.0.0.1:39766","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:803 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-19T18:33:23.247877Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.573611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:23.247955Z","caller":"traceutil/trace.go:171","msg":"trace[1111253867] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:804; }","duration":"351.672375ms","start":"2024-08-19T18:33:22.896268Z","end":"2024-08-19T18:33:23.247941Z","steps":["trace[1111253867] 'range keys from in-memory index tree'  (duration: 351.492319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:23.248007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:33:22.896228Z","time spent":"351.766967ms","remote":"127.0.0.1:39584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-19T18:33:23.248637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.521716ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:23.248809Z","caller":"traceutil/trace.go:171","msg":"trace[1206272449] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:804; }","duration":"315.748684ms","start":"2024-08-19T18:33:22.933042Z","end":"2024-08-19T18:33:23.248790Z","steps":["trace[1206272449] 'range keys from in-memory index tree'  (duration: 314.303955ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:33:26.735159Z","caller":"traceutil/trace.go:171","msg":"trace[1289184813] transaction","detail":"{read_only:false; response_revision:807; number_of_response:1; }","duration":"221.139756ms","start":"2024-08-19T18:33:26.513996Z","end":"2024-08-19T18:33:26.735136Z","steps":["trace[1289184813] 'process raft request'  (duration: 121.485792ms)","trace[1289184813] 'compare'  (duration: 99.498927ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:33:26.860879Z","caller":"traceutil/trace.go:171","msg":"trace[642430515] transaction","detail":"{read_only:false; response_revision:808; number_of_response:1; }","duration":"119.0432ms","start":"2024-08-19T18:33:26.741819Z","end":"2024-08-19T18:33:26.860862Z","steps":["trace[642430515] 'process raft request'  (duration: 117.206405ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:55.309974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.352308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:55.310134Z","caller":"traceutil/trace.go:171","msg":"trace[852851575] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:829; }","duration":"413.549255ms","start":"2024-08-19T18:33:54.896565Z","end":"2024-08-19T18:33:55.310114Z","steps":["trace[852851575] 'range keys from in-memory index tree'  (duration: 413.240636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:55.310192Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:33:54.896528Z","time spent":"413.645391ms","remote":"127.0.0.1:39584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-19T18:33:55.310482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.337032ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:55.310541Z","caller":"traceutil/trace.go:171","msg":"trace[622923905] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:829; }","duration":"377.413324ms","start":"2024-08-19T18:33:54.933116Z","end":"2024-08-19T18:33:55.310530Z","steps":["trace[622923905] 'range keys from in-memory index tree'  (duration: 377.324383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:55.311075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.329428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2024-08-19T18:33:55.311134Z","caller":"traceutil/trace.go:171","msg":"trace[1338518563] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:829; }","duration":"221.391187ms","start":"2024-08-19T18:33:55.089729Z","end":"2024-08-19T18:33:55.311121Z","steps":["trace[1338518563] 'range keys from in-memory index tree'  (duration: 221.142398ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:34:57.939513Z","caller":"traceutil/trace.go:171","msg":"trace[594994971] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"113.046467ms","start":"2024-08-19T18:34:57.826433Z","end":"2024-08-19T18:34:57.939479Z","steps":["trace[594994971] 'process raft request'  (duration: 112.450689ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:35:09 up 14 min,  0 users,  load average: 0.38, 0.27, 0.19
	Linux embed-certs-306581 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7] <==
	W0819 18:25:44.077467       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.101120       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.143334       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.203187       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.220603       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.262977       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.279429       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.321169       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.339556       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.363783       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.377577       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.426959       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.450512       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.488327       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:45.022907       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:47.984070       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:48.400636       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:48.590382       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.005032       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.020647       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.139340       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.170970       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.190647       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.227954       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.271615       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095] <==
	W0819 18:30:55.847979       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:30:55.848056       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:30:55.849266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:30:55.849358       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:31:55.850388       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 18:31:55.850389       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:31:55.850706       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 18:31:55.850769       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 18:31:55.851965       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:31:55.852044       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:33:55.852380       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:33:55.852485       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 18:33:55.852380       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:33:55.852573       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:33:55.853735       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:33:55.853786       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7] <==
	E0819 18:30:01.828882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:30:02.297493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:30:31.836131       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:30:32.306630       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:31:01.843411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:31:02.317379       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:31:13.782776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-306581"
	E0819 18:31:31.849130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:31:32.334955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:31:55.389205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="392.382µs"
	E0819 18:32:01.855137       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:32:02.342644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:32:07.395651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="59.275µs"
	E0819 18:32:31.862055       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:32:32.353911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:33:01.870969       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:33:02.362598       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:33:31.878288       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:33:32.375857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:34:01.887013       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:34:02.383561       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:34:31.896735       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:34:32.393367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:35:01.905065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:35:02.403726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:26:03.471162       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:26:03.500535       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.181"]
	E0819 18:26:03.500637       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:26:03.641315       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:26:03.641377       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:26:03.641405       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:26:03.655482       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:26:03.655769       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:26:03.655793       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:26:03.667408       1 config.go:197] "Starting service config controller"
	I0819 18:26:03.667454       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:26:03.667519       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:26:03.667526       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:26:03.668342       1 config.go:326] "Starting node config controller"
	I0819 18:26:03.668364       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:26:03.768785       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:26:03.768849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:26:03.768962       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c] <==
	W0819 18:25:54.907337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:54.909344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:54.907404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:25:54.909360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.846838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:25:55.846888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.848325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:55.848369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.880099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:55.880148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.890742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:25:55.890787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.981892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:25:55.981994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.053424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:56.053544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.078368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:25:56.078464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.137809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:25:56.137896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.148007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:25:56.148147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.348004       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:25:56.348102       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 18:25:59.488765       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:33:57 embed-certs-306581 kubelet[2887]: E0819 18:33:57.565751    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092437565055061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:07 embed-certs-306581 kubelet[2887]: E0819 18:34:07.568138    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092447567625277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:07 embed-certs-306581 kubelet[2887]: E0819 18:34:07.568492    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092447567625277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:08 embed-certs-306581 kubelet[2887]: E0819 18:34:08.366856    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:34:17 embed-certs-306581 kubelet[2887]: E0819 18:34:17.571135    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092457570540684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:17 embed-certs-306581 kubelet[2887]: E0819 18:34:17.571411    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092457570540684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:21 embed-certs-306581 kubelet[2887]: E0819 18:34:21.370647    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:34:27 embed-certs-306581 kubelet[2887]: E0819 18:34:27.574498    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092467574032741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:27 embed-certs-306581 kubelet[2887]: E0819 18:34:27.575122    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092467574032741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:35 embed-certs-306581 kubelet[2887]: E0819 18:34:35.366528    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:34:37 embed-certs-306581 kubelet[2887]: E0819 18:34:37.577815    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092477577248408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:37 embed-certs-306581 kubelet[2887]: E0819 18:34:37.578253    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092477577248408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:46 embed-certs-306581 kubelet[2887]: E0819 18:34:46.366448    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:34:47 embed-certs-306581 kubelet[2887]: E0819 18:34:47.581262    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092487580635073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:47 embed-certs-306581 kubelet[2887]: E0819 18:34:47.581796    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092487580635073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:57 embed-certs-306581 kubelet[2887]: E0819 18:34:57.381507    2887 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:34:57 embed-certs-306581 kubelet[2887]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:34:57 embed-certs-306581 kubelet[2887]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:34:57 embed-certs-306581 kubelet[2887]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:34:57 embed-certs-306581 kubelet[2887]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:34:57 embed-certs-306581 kubelet[2887]: E0819 18:34:57.584579    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092497583909819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:34:57 embed-certs-306581 kubelet[2887]: E0819 18:34:57.584700    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092497583909819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:35:01 embed-certs-306581 kubelet[2887]: E0819 18:35:01.366356    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:35:07 embed-certs-306581 kubelet[2887]: E0819 18:35:07.587176    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092507586617949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:35:07 embed-certs-306581 kubelet[2887]: E0819 18:35:07.587727    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092507586617949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8] <==
	I0819 18:26:05.004304       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:26:05.014324       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:26:05.014540       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:26:05.024230       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:26:05.024396       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-306581_0f3bf2ec-21f3-43f5-92a4-a50b19d57be5!
	I0819 18:26:05.025861       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a48ae1f6-d14d-4f6a-8344-3fcd841841fe", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-306581_0f3bf2ec-21f3-43f5-92a4-a50b19d57be5 became leader
	I0819 18:26:05.126900       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-306581_0f3bf2ec-21f3-43f5-92a4-a50b19d57be5!
	

-- /stdout --
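The kubelet tail above repeats two errors. The metrics-server pod is stuck in ImagePullBackOff because the addon was deliberately pointed at the unreachable registry fake.domain (see the "enable metrics-server" entries in the Audit table further down), and the iptables canary fails because the ip6tables nat table is not available in the guest. A minimal sketch of confirming both by hand, assuming the embed-certs-306581 profile is still running; the deployment name metrics-server is inferred from the pod name above, and these commands are illustrative, not part of the test run:

	# Show the image the metrics-server deployment is actually trying to pull
	kubectl --context embed-certs-306581 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

	# Check whether the ip6tables nat table exists inside the minikube VM
	minikube -p embed-certs-306581 ssh "sudo ip6tables -t nat -L"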
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-306581 -n embed-certs-306581
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-306581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-j8qbw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-306581 describe pod metrics-server-6867b74b74-j8qbw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-306581 describe pod metrics-server-6867b74b74-j8qbw: exit status 1 (64.417808ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-j8qbw" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-306581 describe pod metrics-server-6867b74b74-j8qbw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.18s)
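The NotFound above is almost certainly a namespace mismatch rather than a missing pod: the non-running pod was listed with -A across all namespaces, but the follow-up describe omitted a namespace, so kubectl looked in default while the pod lives in kube-system (see the kubelet messages above). A namespaced equivalent, shown only as an illustration:

	kubectl --context embed-certs-306581 -n kube-system \
	  describe pod metrics-server-6867b74b74-j8qbw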

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (323.87s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-233969 -n no-preload-233969
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 18:32:07.314417906 +0000 UTC m=+5989.018585792
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-233969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-233969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.952µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-233969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
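No deployment info is printed above because the describe call was only issued after the 9m0s wait had already exhausted the context deadline, so it failed in under 2µs without ever contacting the cluster. A hedged sketch of checking the dashboard addon by hand, assuming the no-preload-233969 context is still reachable; the commands are illustrative, not taken from the test run:

	# List what the dashboard addon actually created, if anything
	kubectl --context no-preload-233969 -n kubernetes-dashboard get deploy,pods -o wide

	# Verify which image the scraper deployment references (the test expects registry.k8s.io/echoserver:1.4)
	kubectl --context no-preload-233969 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'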
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-233969 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-233969 logs -n 25: (1.278608523s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-233969                  | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-233969                                   | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233045             | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079123        | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233045                  | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-813424       | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:16 UTC |
	|         | default-k8s-diff-port-813424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079123             | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-233045 image list                           | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-814719 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | disable-driver-mounts-814719                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306581            | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC | 19 Aug 24 18:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306581                 | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC | 19 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:32 UTC | 19 Aug 24 18:32 UTC |
	| start   | -p auto-321572 --memory=3072                           | auto-321572                  | jenkins | v1.33.1 | 19 Aug 24 18:32 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:32:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:32:05.354647   70919 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:32:05.354875   70919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:32:05.354882   70919 out.go:358] Setting ErrFile to fd 2...
	I0819 18:32:05.354887   70919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:32:05.355071   70919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:32:05.355659   70919 out.go:352] Setting JSON to false
	I0819 18:32:05.356578   70919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8070,"bootTime":1724084255,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:32:05.356642   70919 start.go:139] virtualization: kvm guest
	I0819 18:32:05.359041   70919 out.go:177] * [auto-321572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:32:05.360494   70919 notify.go:220] Checking for updates...
	I0819 18:32:05.360515   70919 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:32:05.361812   70919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:32:05.362867   70919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:32:05.364028   70919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:32:05.365172   70919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:32:05.366290   70919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:32:05.367897   70919 config.go:182] Loaded profile config "default-k8s-diff-port-813424": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:32:05.368023   70919 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:32:05.368136   70919 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:32:05.368249   70919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:32:05.406772   70919 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:32:05.408142   70919 start.go:297] selected driver: kvm2
	I0819 18:32:05.408171   70919 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:32:05.408193   70919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:32:05.409192   70919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:32:05.409310   70919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:32:05.425597   70919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:32:05.425643   70919 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:32:05.425843   70919 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:32:05.425907   70919 cni.go:84] Creating CNI manager for ""
	I0819 18:32:05.425921   70919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:32:05.425935   70919 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:32:05.425981   70919 start.go:340] cluster config:
	{Name:auto-321572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:32:05.426066   70919 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:32:05.427855   70919 out.go:177] * Starting "auto-321572" primary control-plane node in "auto-321572" cluster
	I0819 18:32:05.429143   70919 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:32:05.429183   70919 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:32:05.429190   70919 cache.go:56] Caching tarball of preloaded images
	I0819 18:32:05.429279   70919 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:32:05.429288   70919 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:32:05.429385   70919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/config.json ...
	I0819 18:32:05.429402   70919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/config.json: {Name:mkf413fce2e251657283859833579dfe0ff7d680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:32:05.429576   70919 start.go:360] acquireMachinesLock for auto-321572: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:32:05.429615   70919 start.go:364] duration metric: took 19.125µs to acquireMachinesLock for "auto-321572"
	I0819 18:32:05.429640   70919 start.go:93] Provisioning new machine with config: &{Name:auto-321572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.0 ClusterName:auto-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:32:05.429704   70919 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.927605286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092327927581416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5344aa6-802a-4318-951c-9b61031a3bab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.928236890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48ae7d15-61d9-4c6e-aff8-3bf1d2bf65fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.928358405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48ae7d15-61d9-4c6e-aff8-3bf1d2bf65fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.928587128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48ae7d15-61d9-4c6e-aff8-3bf1d2bf65fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.974747011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f5465c9-22c0-4058-90c2-8978ea2d358d name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.974836888Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f5465c9-22c0-4058-90c2-8978ea2d358d name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.976179883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86339aa9-85a3-4fcf-a38f-e58cdcb72c0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.976597517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092327976573078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86339aa9-85a3-4fcf-a38f-e58cdcb72c0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.977262863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87fc85a7-8b56-447c-b3b3-27e17f3314c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.977316249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87fc85a7-8b56-447c-b3b3-27e17f3314c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:07 no-preload-233969 crio[726]: time="2024-08-19 18:32:07.977553350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87fc85a7-8b56-447c-b3b3-27e17f3314c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.015255189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3327af3f-ecc6-497a-9f67-050b35e1e821 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.015353858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3327af3f-ecc6-497a-9f67-050b35e1e821 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.016435214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=452d32db-316a-4126-9b8e-5ca13f1f4e61 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.016812027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092328016786120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=452d32db-316a-4126-9b8e-5ca13f1f4e61 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.017323466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d1496da-8b2a-40e3-a253-3f1e78c7e804 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.017374826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d1496da-8b2a-40e3-a253-3f1e78c7e804 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.017601190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d1496da-8b2a-40e3-a253-3f1e78c7e804 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.063002222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d522799-caae-417e-9a21-7c6a0cce9298 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.063372320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d522799-caae-417e-9a21-7c6a0cce9298 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.064629544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4577ce88-eedd-4874-b459-6bbc786918c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.065188938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092328065164882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4577ce88-eedd-4874-b459-6bbc786918c2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.065739265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66d22d55-93c9-4180-bdf7-62e3316a448d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.066317473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66d22d55-93c9-4180-bdf7-62e3316a448d name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:08 no-preload-233969 crio[726]: time="2024-08-19 18:32:08.066799222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1,PodSandboxId:b4d5818be915b99aca7ccd2c37fef2ad0ccb06443ddada2cd1bd46dd3dc1de38,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091454193397783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a50087-cb20-407b-9a87-03d04d230afb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5,PodSandboxId:37767a2eba14bf0fcd760800859ba6d199f9b9e437a1ecb9a59ee974581f4f7d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453555346342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kdrzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0db6b602-ca09-40f4-9492-93cbbf919aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c,PodSandboxId:c6ad35e9012be2dea475046d15ad05506df9b9212927b7e72670205b59323451,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091453465869004,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vb6dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d
39fac8-0d53-4380-a903-080414848e24,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c,PodSandboxId:2aee12971ae2867607209d69cecb9ae7f33f29a54108669835a23500837ea191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724091452901276652,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt5nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd68c04-8a56-4a98-a1fb-21a194ebb5e3,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e,PodSandboxId:a588363c1a4b3a4dc2693e3989d453399725b29e976bedbec900f1f7fddcd5fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091442066204690,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cfc5bf79fdfe549dd37044fd3c5166,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198,PodSandboxId:45067d098f0250c360e0a5c565093de48f05ccc6a2d5287faba18c94416613fd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091442031130486,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72358b243ceb044a02ff12f886e757,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd,PodSandboxId:5afb929a542e01a963ef67798b3ad1c0c49db77eb44f9304c9293a3f0e1298c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091441983131010,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812,PodSandboxId:f6d7bbca3f21c8665430018c644312239b044a01a1f2ebc0f9a44375c683ada6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091441916728734,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e12b21a98fb74bba8dd48a2456ee75b1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca,PodSandboxId:9eb1a6b8f20d4c1ad23b967067caf781c24cc1cf19f2c34b16c33f942c69ac21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091155834445359,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-233969,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2054ad2dac7b534476d987563ad3648,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66d22d55-93c9-4180-bdf7-62e3316a448d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07a784011c163       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   b4d5818be915b       storage-provisioner
	77567c11d5611       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   37767a2eba14b       coredns-6f6b679f8f-kdrzp
	8561dfaa22d9d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   c6ad35e9012be       coredns-6f6b679f8f-vb6dx
	0fa5dfbb43c52       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   2aee12971ae28       kube-proxy-pt5nj
	bf6e79f754334       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   a588363c1a4b3       etcd-no-preload-233969
	a72417b056413       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   45067d098f025       kube-controller-manager-no-preload-233969
	7c6011dd9bf6f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   5afb929a542e0       kube-apiserver-no-preload-233969
	155f37c341f82       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   f6d7bbca3f21c       kube-scheduler-no-preload-233969
	76e071aa0b0c8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   9eb1a6b8f20d4       kube-apiserver-no-preload-233969
	
	
	==> coredns [77567c11d5611c1a2c969b8a6fc34055e69a489fbc0107f5e8a20b37da5805c5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8561dfaa22d9dbfaad3595ea21ed0d67f5cf819327f2f363df9e3ea60b4dcc1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-233969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-233969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=no-preload-233969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_17_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:17:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-233969
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:32:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:27:50 +0000   Mon, 19 Aug 2024 18:17:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:27:50 +0000   Mon, 19 Aug 2024 18:17:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:27:50 +0000   Mon, 19 Aug 2024 18:17:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:27:50 +0000   Mon, 19 Aug 2024 18:17:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.8
	  Hostname:    no-preload-233969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef4ac605df354c2fb51fb515363583c1
	  System UUID:                ef4ac605-df35-4c2f-b51f-b515363583c1
	  Boot ID:                    4f188a38-911b-4def-8f27-e5504e459084
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-kdrzp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-vb6dx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-233969                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-233969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-233969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pt5nj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-233969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-bfkkf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-233969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-233969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-233969 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-233969 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-233969 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-233969 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-233969 event: Registered Node no-preload-233969 in Controller
	
	
	==> dmesg <==
	[  +0.039051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.009642] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.833961] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529430] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000035] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.317366] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.060791] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056977] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.181447] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.139739] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.272848] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +15.648356] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.057830] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.314347] systemd-fstab-generator[1419]: Ignoring "noauto" option for root device
	[  +3.276875] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.124171] kauditd_printk_skb: 55 callbacks suppressed
	[Aug19 18:13] kauditd_printk_skb: 30 callbacks suppressed
	[Aug19 18:17] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.003493] systemd-fstab-generator[3074]: Ignoring "noauto" option for root device
	[  +4.479238] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.577101] systemd-fstab-generator[3396]: Ignoring "noauto" option for root device
	[  +5.320862] systemd-fstab-generator[3527]: Ignoring "noauto" option for root device
	[  +0.125050] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.624852] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [bf6e79f7543342fcc9df91a5421220a96a2eafaf71f5e76e68e905f2c8cea28e] <==
	{"level":"info","ts":"2024-08-19T18:17:23.293993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:17:23.294082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-19T18:21:02.358465Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":18041298524702058265,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-19T18:21:02.796070Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"515.427003ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.796273Z","caller":"traceutil/trace.go:171","msg":"trace[1995537411] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:660; }","duration":"515.671401ms","start":"2024-08-19T18:21:02.280569Z","end":"2024-08-19T18:21:02.796241Z","steps":["trace[1995537411] 'range keys from in-memory index tree'  (duration: 515.415437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.796356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"939.957838ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18041298524702058266 > lease_revoke:<id:7a5f916bdba526ba>","response":"size:29"}
	{"level":"info","ts":"2024-08-19T18:21:02.796937Z","caller":"traceutil/trace.go:171","msg":"trace[987301924] linearizableReadLoop","detail":"{readStateIndex:715; appliedIndex:713; }","duration":"939.382421ms","start":"2024-08-19T18:21:01.857542Z","end":"2024-08-19T18:21:02.796924Z","steps":["trace[987301924] 'read index received'  (duration: 938.373694ms)","trace[987301924] 'applied index is now lower than readState.Index'  (duration: 1.007575ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:21:02.797221Z","caller":"traceutil/trace.go:171","msg":"trace[1702930991] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"979.150088ms","start":"2024-08-19T18:21:01.818055Z","end":"2024-08-19T18:21:02.797205Z","steps":["trace[1702930991] 'process raft request'  (duration: 978.372082ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:21:02.798043Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:01.818028Z","time spent":"979.241984ms","remote":"127.0.0.1:44994","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-233969\" mod_revision:652 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-233969\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-233969\" > >"}
	{"level":"warn","ts":"2024-08-19T18:21:02.976828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.119267835s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.977061Z","caller":"traceutil/trace.go:171","msg":"trace[764819430] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:661; }","duration":"1.119503489s","start":"2024-08-19T18:21:01.857536Z","end":"2024-08-19T18:21:02.977039Z","steps":["trace[764819430] 'agreement among raft nodes before linearized reading'  (duration: 940.706645ms)","trace[764819430] 'range keys from in-memory index tree'  (duration: 178.539178ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.977159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:01.857495Z","time spent":"1.119645305s","remote":"127.0.0.1:44724","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-19T18:21:02.977362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"919.294159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.977435Z","caller":"traceutil/trace.go:171","msg":"trace[570947311] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:661; }","duration":"919.369093ms","start":"2024-08-19T18:21:02.058053Z","end":"2024-08-19T18:21:02.977422Z","steps":["trace[570947311] 'agreement among raft nodes before linearized reading'  (duration: 740.210413ms)","trace[570947311] 'range keys from in-memory index tree'  (duration: 179.072273ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.977849Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:02.058018Z","time spent":"919.812793ms","remote":"127.0.0.1:44904","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-19T18:21:02.978123Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.761586ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-19T18:21:02.978186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.381117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-19T18:21:02.978236Z","caller":"traceutil/trace.go:171","msg":"trace[146728713] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:661; }","duration":"245.434228ms","start":"2024-08-19T18:21:02.732793Z","end":"2024-08-19T18:21:02.978227Z","steps":["trace[146728713] 'agreement among raft nodes before linearized reading'  (duration: 65.514785ms)","trace[146728713] 'count revisions from in-memory index tree'  (duration: 179.856377ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.978416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"810.217529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:21:02.978460Z","caller":"traceutil/trace.go:171","msg":"trace[353660773] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:661; }","duration":"810.26216ms","start":"2024-08-19T18:21:02.168188Z","end":"2024-08-19T18:21:02.978450Z","steps":["trace[353660773] 'agreement among raft nodes before linearized reading'  (duration: 630.126127ms)","trace[353660773] 'range keys from in-memory index tree'  (duration: 180.035394ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T18:21:02.978488Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:21:02.168144Z","time spent":"810.336666ms","remote":"127.0.0.1:44730","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-08-19T18:21:02.978191Z","caller":"traceutil/trace.go:171","msg":"trace[1719472852] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:661; }","duration":"181.832427ms","start":"2024-08-19T18:21:02.796349Z","end":"2024-08-19T18:21:02.978181Z","steps":["trace[1719472852] 'range keys from in-memory index tree'  (duration: 179.801051ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:27:23.327215Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":726}
	{"level":"info","ts":"2024-08-19T18:27:23.336717Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":726,"took":"9.067046ms","hash":4024823853,"current-db-size-bytes":2375680,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2375680,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-08-19T18:27:23.336782Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4024823853,"revision":726,"compact-revision":-1}
	
	
	==> kernel <==
	 18:32:08 up 20 min,  0 users,  load average: 0.09, 0.16, 0.17
	Linux no-preload-233969 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [76e071aa0b0c8a06fb2fa644cf6450b7ba130899af2603d895c947e6291562ca] <==
	W0819 18:17:15.676636       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.696779       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.758330       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.771074       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.812085       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.823744       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.858103       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.878549       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.903737       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.905221       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.922456       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:15.967764       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.069774       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.172736       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.191563       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.197109       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.202569       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.217211       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.221560       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.284725       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.594860       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.679591       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.788770       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:16.976851       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:17:17.089894       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7c6011dd9bf6ffffd9fe22a5c5037c57e099dba666794b9a288f314e36ab7dbd] <==
	W0819 18:27:25.665065       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:27:25.665192       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:27:25.666333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:27:25.666371       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:28:25.667512       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:28:25.667586       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 18:28:25.667654       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:28:25.667680       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:28:25.668821       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:28:25.668859       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:30:25.669578       1 handler_proxy.go:99] no RequestInfo found in the context
	W0819 18:30:25.669578       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:30:25.670060       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0819 18:30:25.670125       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0819 18:30:25.671276       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:30:25.671362       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a72417b0564131963fc9fb1346896cb455214e683357c704db318a8e7da63198] <==
	E0819 18:27:01.739022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:27:02.201904       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:27:31.745249       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:27:32.210577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:27:50.546201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-233969"
	E0819 18:28:01.751367       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:28:02.218758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:28:31.758929       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:28:32.234683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:28:46.236747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="197.819µs"
	I0819 18:28:59.238174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.935µs"
	E0819 18:29:01.765102       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:29:02.243609       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:29:31.771562       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:29:32.252304       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:30:01.778500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:30:02.262608       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:30:31.787425       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:30:32.272607       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:31:01.794011       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:31:02.281877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:31:31.801044       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:31:32.289526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:32:01.807157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:32:02.297854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0fa5dfbb43c52ad39b74aed140c756006f0f11c9a69472d8b92addff0d64b08c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:17:33.480501       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:17:33.546623       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.8"]
	E0819 18:17:33.546716       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:17:33.912797       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:17:33.912901       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:17:33.912988       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:17:33.915176       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:17:33.915452       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:17:33.915486       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:17:33.920472       1 config.go:197] "Starting service config controller"
	I0819 18:17:33.920605       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:17:33.920649       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:17:33.920665       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:17:33.921151       1 config.go:326] "Starting node config controller"
	I0819 18:17:33.921197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:17:34.020823       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:17:34.020853       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:17:34.021486       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [155f37c341f82292331395fcf8d5a6e110c77aefa45e4e23003310f74d179812] <==
	W0819 18:17:24.698639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:17:24.698763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:24.698888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:17:24.698935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.593232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:17:25.593287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.606138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:17:25.606170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.610633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 18:17:25.610705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.663813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:17:25.664034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.695643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:17:25.695700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.706305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:17:25.706478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.755170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:17:25.755324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.858020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:17:25.858482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:25.894545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 18:17:25.894712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:17:26.169314       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:17:26.169371       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 18:17:28.585641       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:30:57 no-preload-233969 kubelet[3402]: E0819 18:30:57.423182    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092257422852703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:07 no-preload-233969 kubelet[3402]: E0819 18:31:07.424268    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092267424057899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:07 no-preload-233969 kubelet[3402]: E0819 18:31:07.424294    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092267424057899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:11 no-preload-233969 kubelet[3402]: E0819 18:31:11.221704    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:31:17 no-preload-233969 kubelet[3402]: E0819 18:31:17.426617    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092277425478088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:17 no-preload-233969 kubelet[3402]: E0819 18:31:17.426678    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092277425478088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:22 no-preload-233969 kubelet[3402]: E0819 18:31:22.221820    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:31:27 no-preload-233969 kubelet[3402]: E0819 18:31:27.246169    3402 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:31:27 no-preload-233969 kubelet[3402]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:31:27 no-preload-233969 kubelet[3402]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:31:27 no-preload-233969 kubelet[3402]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:31:27 no-preload-233969 kubelet[3402]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:31:27 no-preload-233969 kubelet[3402]: E0819 18:31:27.427855    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092287427465121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:27 no-preload-233969 kubelet[3402]: E0819 18:31:27.427892    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092287427465121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:35 no-preload-233969 kubelet[3402]: E0819 18:31:35.223020    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:31:37 no-preload-233969 kubelet[3402]: E0819 18:31:37.429823    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092297429471979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:37 no-preload-233969 kubelet[3402]: E0819 18:31:37.429856    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092297429471979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:47 no-preload-233969 kubelet[3402]: E0819 18:31:47.431316    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092307430835978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:47 no-preload-233969 kubelet[3402]: E0819 18:31:47.431811    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092307430835978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:50 no-preload-233969 kubelet[3402]: E0819 18:31:50.221348    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:31:57 no-preload-233969 kubelet[3402]: E0819 18:31:57.433821    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092317432919610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:31:57 no-preload-233969 kubelet[3402]: E0819 18:31:57.433899    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092317432919610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:04 no-preload-233969 kubelet[3402]: E0819 18:32:04.222283    3402 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bfkkf" podUID="00206622-fe4f-4f26-8f69-ac7fb6a39805"
	Aug 19 18:32:07 no-preload-233969 kubelet[3402]: E0819 18:32:07.435412    3402 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092327435048927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:32:07 no-preload-233969 kubelet[3402]: E0819 18:32:07.435454    3402 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092327435048927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [07a784011c1630fa737c01f73a433114678f21908c6f206e3bb584867304b1c1] <==
	I0819 18:17:34.321868       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:17:34.359710       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:17:34.359799       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:17:34.389186       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:17:34.389428       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-233969_392a54d5-4efb-479b-93c2-958a02d43a17!
	I0819 18:17:34.391085       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afc45dcb-0808-4080-8cf1-3a1b697f30bb", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-233969_392a54d5-4efb-479b-93c2-958a02d43a17 became leader
	I0819 18:17:34.491466       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-233969_392a54d5-4efb-479b-93c2-958a02d43a17!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-233969 -n no-preload-233969
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-233969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-bfkkf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-233969 describe pod metrics-server-6867b74b74-bfkkf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-233969 describe pod metrics-server-6867b74b74-bfkkf: exit status 1 (67.405882ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-bfkkf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-233969 describe pod metrics-server-6867b74b74-bfkkf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (323.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (174.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
[the warning above repeated 70 more times while the test continued polling the unreachable apiserver]
E0819 18:30:21.262929   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.246:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.246:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (225.483215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-079123" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-079123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-079123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.563µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-079123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
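The failures above show the old-k8s-version apiserver never became reachable again after the stop/start cycle, so neither the dashboard pod check nor the follow-up kubectl describe could complete. As a rough manual follow-up (a sketch only, reusing the profile name old-k8s-version-079123 and the same commands the test itself runs above), one could re-check the control plane and the dashboard addon with:

    out/minikube-linux-amd64 status -p old-k8s-version-079123
    out/minikube-linux-amd64 -p old-k8s-version-079123 logs -n 25
    kubectl --context old-k8s-version-079123 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-079123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

If the status output still reports the apiserver as Stopped, the kubectl calls will keep failing with the same connection-refused errors seen in the warnings above.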
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (218.119416ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-079123 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-975771                              | cert-expiration-975771       | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:05 UTC |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:05 UTC | 19 Aug 24 18:06 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-233969                  | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-233969                                   | no-preload-233969            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233045             | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-079123        | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233045                  | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233045 --memory=2200 --alsologtostderr   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-813424       | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-813424 | jenkins | v1.33.1 | 19 Aug 24 18:06 UTC | 19 Aug 24 18:16 UTC |
	|         | default-k8s-diff-port-813424                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-079123             | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC | 19 Aug 24 18:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-079123                              | old-k8s-version-079123       | jenkins | v1.33.1 | 19 Aug 24 18:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-233045 image list                           | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p newest-cni-233045                                   | newest-cni-233045            | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	| delete  | -p                                                     | disable-driver-mounts-814719 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | disable-driver-mounts-814719                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306581            | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC | 19 Aug 24 18:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306581                 | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306581                                  | embed-certs-306581           | jenkins | v1.33.1 | 19 Aug 24 18:15 UTC | 19 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:15:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:15:52.756356   66229 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:15:52.756664   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756675   66229 out.go:358] Setting ErrFile to fd 2...
	I0819 18:15:52.756680   66229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:15:52.756881   66229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:15:52.757409   66229 out.go:352] Setting JSON to false
	I0819 18:15:52.758366   66229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7098,"bootTime":1724084255,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:15:52.758430   66229 start.go:139] virtualization: kvm guest
	I0819 18:15:52.760977   66229 out.go:177] * [embed-certs-306581] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:15:52.762479   66229 notify.go:220] Checking for updates...
	I0819 18:15:52.762504   66229 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:15:52.763952   66229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:15:52.765453   66229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:15:52.766810   66229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:15:52.768135   66229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:15:52.769369   66229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:15:52.771017   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:52.771443   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.771504   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.786463   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0819 18:15:52.786925   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.787501   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.787523   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.787800   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.787975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.788239   66229 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:15:52.788527   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.788562   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.803703   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0819 18:15:52.804145   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.804609   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.804625   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.804962   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.805142   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.842707   66229 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 18:15:52.844070   66229 start.go:297] selected driver: kvm2
	I0819 18:15:52.844092   66229 start.go:901] validating driver "kvm2" against &{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.844258   66229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:15:52.844998   66229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.845085   66229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:15:52.860606   66229 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:15:52.861678   66229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:15:52.861730   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:15:52.861742   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:15:52.861793   66229 start.go:340] cluster config:
	{Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:52.862003   66229 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:52.864173   66229 out.go:177] * Starting "embed-certs-306581" primary control-plane node in "embed-certs-306581" cluster
	I0819 18:15:52.865772   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:15:52.865819   66229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:15:52.865827   66229 cache.go:56] Caching tarball of preloaded images
	I0819 18:15:52.865902   66229 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:15:52.865913   66229 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:15:52.866012   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:15:52.866250   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:15:52.866299   66229 start.go:364] duration metric: took 26.7µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:15:52.866311   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:15:52.866316   66229 fix.go:54] fixHost starting: 
	I0819 18:15:52.866636   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:15:52.866671   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:15:52.883154   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0819 18:15:52.883648   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:15:52.884149   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:15:52.884170   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:15:52.884509   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:15:52.884710   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.884888   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:15:52.886632   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Running err=<nil>
	W0819 18:15:52.886653   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:15:52.888856   66229 out.go:177] * Updating the running kvm2 "embed-certs-306581" VM ...
	I0819 18:15:50.375775   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.376597   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:50.455083   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:50.467702   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:50.467768   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:50.517276   63216 cri.go:89] found id: ""
	I0819 18:15:50.517306   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.517315   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:50.517323   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:50.517399   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:50.550878   63216 cri.go:89] found id: ""
	I0819 18:15:50.550905   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.550914   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:50.550921   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:50.550984   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:50.583515   63216 cri.go:89] found id: ""
	I0819 18:15:50.583543   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.583553   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:50.583560   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:50.583622   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:50.618265   63216 cri.go:89] found id: ""
	I0819 18:15:50.618291   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.618299   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:50.618304   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:50.618362   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:50.653436   63216 cri.go:89] found id: ""
	I0819 18:15:50.653461   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.653469   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:50.653476   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:50.653534   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:50.687715   63216 cri.go:89] found id: ""
	I0819 18:15:50.687745   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.687757   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:50.687764   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:50.687885   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:50.721235   63216 cri.go:89] found id: ""
	I0819 18:15:50.721262   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.721272   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:50.721280   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:50.721328   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:50.754095   63216 cri.go:89] found id: ""
	I0819 18:15:50.754126   63216 logs.go:276] 0 containers: []
	W0819 18:15:50.754134   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:50.754143   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:50.754156   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:50.805661   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:50.805698   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:50.819495   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:50.819536   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:50.887296   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:50.887317   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:50.887334   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:50.966224   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:50.966261   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
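The cycle above reduces to one shell command per expected component: `crictl ps -a --quiet --name=<component>` over SSH, with empty output reported as "No container was found". A minimal Go sketch of that probe, run locally rather than over SSH and not taken from minikube's actual cri package, could look like this (the helper name is made up for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainersByName mirrors the probe recorded in the log: ask crictl for
// every container (running or exited) whose name matches, and return the IDs.
// An empty result corresponds to the "No container was found" warnings above.
func listContainersByName(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainersByName(c)
		fmt.Printf("%s: %d container(s), err=%v\n", c, len(ids), err)
	}
}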
	I0819 18:15:53.508007   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:53.520812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:53.520870   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:53.552790   63216 cri.go:89] found id: ""
	I0819 18:15:53.552816   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.552823   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:53.552829   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:53.552873   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:53.585937   63216 cri.go:89] found id: ""
	I0819 18:15:53.585969   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.585978   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:53.585986   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:53.586057   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:53.618890   63216 cri.go:89] found id: ""
	I0819 18:15:53.618915   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.618922   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:53.618928   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:53.618975   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:53.650045   63216 cri.go:89] found id: ""
	I0819 18:15:53.650069   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.650076   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:53.650082   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:53.650138   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:53.685069   63216 cri.go:89] found id: ""
	I0819 18:15:53.685097   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.685106   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:53.685113   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:53.685179   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:53.717742   63216 cri.go:89] found id: ""
	I0819 18:15:53.717771   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.717778   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:53.717784   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:53.717832   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:53.747768   63216 cri.go:89] found id: ""
	I0819 18:15:53.747798   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.747806   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:53.747812   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:53.747858   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:53.779973   63216 cri.go:89] found id: ""
	I0819 18:15:53.779999   63216 logs.go:276] 0 containers: []
	W0819 18:15:53.780006   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:53.780016   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:53.780027   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:53.815619   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:53.815656   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:53.866767   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:53.866802   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:53.879693   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:53.879721   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:53.947610   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:53.947640   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:53.947659   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:52.172237   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:54.172434   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:52.890101   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:15:52.890131   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:15:52.890374   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:15:52.892900   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893405   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:12:30 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:15:52.893431   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:15:52.893613   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:15:52.893796   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.893979   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:15:52.894149   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:15:52.894328   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:52.894580   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:15:52.894597   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:15:55.789130   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:54.376799   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.884787   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:56.524639   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:56.537312   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:56.537395   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:56.569913   63216 cri.go:89] found id: ""
	I0819 18:15:56.569958   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.569965   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:56.569972   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:56.570031   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:56.602119   63216 cri.go:89] found id: ""
	I0819 18:15:56.602145   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.602152   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:56.602158   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:56.602211   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:56.634864   63216 cri.go:89] found id: ""
	I0819 18:15:56.634900   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.634910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:56.634920   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:56.634982   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:56.667099   63216 cri.go:89] found id: ""
	I0819 18:15:56.667127   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.667136   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:56.667145   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:56.667194   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:56.703539   63216 cri.go:89] found id: ""
	I0819 18:15:56.703562   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.703571   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:56.703576   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:56.703637   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.734668   63216 cri.go:89] found id: ""
	I0819 18:15:56.734691   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.734698   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:56.734703   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:56.734747   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:56.768840   63216 cri.go:89] found id: ""
	I0819 18:15:56.768866   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.768874   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:56.768880   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:56.768925   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:56.800337   63216 cri.go:89] found id: ""
	I0819 18:15:56.800366   63216 logs.go:276] 0 containers: []
	W0819 18:15:56.800375   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:56.800384   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:56.800398   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:56.866036   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:56.866060   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:56.866072   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:15:56.955372   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:15:56.955414   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:15:57.004450   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:15:57.004477   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:15:57.057284   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:57.057320   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.570450   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:59.583640   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:15:59.583729   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:15:59.617911   63216 cri.go:89] found id: ""
	I0819 18:15:59.617943   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.617954   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:15:59.617963   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:15:59.618014   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:15:59.650239   63216 cri.go:89] found id: ""
	I0819 18:15:59.650265   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.650274   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:15:59.650279   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:15:59.650329   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:15:59.684877   63216 cri.go:89] found id: ""
	I0819 18:15:59.684902   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.684910   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:15:59.684916   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:15:59.684977   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:15:59.717378   63216 cri.go:89] found id: ""
	I0819 18:15:59.717402   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.717414   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:15:59.717428   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:15:59.717484   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:15:59.748937   63216 cri.go:89] found id: ""
	I0819 18:15:59.748968   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.748980   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:15:59.748989   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:15:59.749058   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:15:56.672222   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.171375   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:58.861002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:15:59.375951   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:01.376193   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:03.376512   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:15:59.781784   63216 cri.go:89] found id: ""
	I0819 18:15:59.781819   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.781830   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:15:59.781837   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:15:59.781899   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:15:59.815593   63216 cri.go:89] found id: ""
	I0819 18:15:59.815626   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.815637   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:15:59.815645   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:15:59.815709   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:15:59.847540   63216 cri.go:89] found id: ""
	I0819 18:15:59.847571   63216 logs.go:276] 0 containers: []
	W0819 18:15:59.847581   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:15:59.847595   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:15:59.847609   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:15:59.860256   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:15:59.860292   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:15:59.931873   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:15:59.931900   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:15:59.931915   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:00.011897   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:00.011938   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:00.047600   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:00.047628   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.599457   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:02.617040   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:02.617112   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:02.658148   63216 cri.go:89] found id: ""
	I0819 18:16:02.658173   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.658181   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:02.658187   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:02.658256   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:02.711833   63216 cri.go:89] found id: ""
	I0819 18:16:02.711873   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.711882   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:02.711889   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:02.711945   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:02.746611   63216 cri.go:89] found id: ""
	I0819 18:16:02.746644   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.746652   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:02.746658   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:02.746712   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:02.781731   63216 cri.go:89] found id: ""
	I0819 18:16:02.781757   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.781764   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:02.781771   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:02.781827   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:02.814215   63216 cri.go:89] found id: ""
	I0819 18:16:02.814242   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.814253   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:02.814260   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:02.814320   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:02.848767   63216 cri.go:89] found id: ""
	I0819 18:16:02.848804   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.848815   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:02.848823   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:02.848881   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:02.882890   63216 cri.go:89] found id: ""
	I0819 18:16:02.882913   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.882920   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:02.882927   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:02.882983   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:02.918333   63216 cri.go:89] found id: ""
	I0819 18:16:02.918362   63216 logs.go:276] 0 containers: []
	W0819 18:16:02.918370   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:02.918393   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:02.918405   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:02.966994   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:02.967024   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:02.980377   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:02.980437   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:03.045097   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:03.045127   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:03.045145   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:03.126682   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:03.126727   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:01.671492   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.171471   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:04.941029   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:05.376677   62749 pod_ready.go:103] pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:05.376705   62749 pod_ready.go:82] duration metric: took 4m0.006404877s for pod "metrics-server-6867b74b74-tp742" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:05.376714   62749 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 18:16:05.376720   62749 pod_ready.go:39] duration metric: took 4m6.335802515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:05.376735   62749 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:16:05.376775   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.376822   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.419678   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:05.419719   62749 cri.go:89] found id: ""
	I0819 18:16:05.419728   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:05.419801   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.424210   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.424271   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.459501   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:05.459527   62749 cri.go:89] found id: ""
	I0819 18:16:05.459535   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:05.459578   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.463654   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.463711   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.497591   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:05.497613   62749 cri.go:89] found id: ""
	I0819 18:16:05.497620   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:05.497667   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.501207   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.501274   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.535112   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:05.535141   62749 cri.go:89] found id: ""
	I0819 18:16:05.535150   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:05.535215   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.538855   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.538909   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.573744   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:05.573769   62749 cri.go:89] found id: ""
	I0819 18:16:05.573776   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:05.573824   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.577981   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.578045   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.616545   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:05.616569   62749 cri.go:89] found id: ""
	I0819 18:16:05.616577   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:05.616630   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.620549   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.620597   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.662743   62749 cri.go:89] found id: ""
	I0819 18:16:05.662781   62749 logs.go:276] 0 containers: []
	W0819 18:16:05.662792   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.662800   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:05.662855   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:05.711433   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.711456   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:05.711463   62749 cri.go:89] found id: ""
	I0819 18:16:05.711472   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:05.711536   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.716476   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:05.720240   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:05.720261   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.261474   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:06.261523   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:06.384895   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:06.384927   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:06.421665   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:06.421700   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:06.461866   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:06.461900   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:06.496543   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:06.496570   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:06.551478   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:06.551518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:06.586858   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.586886   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.625272   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.625300   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:06.697922   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:06.697960   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:06.711624   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:06.711658   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:06.752648   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:06.752677   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:06.796805   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:06.796836   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:05.662843   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:05.680724   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:05.680811   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:05.719205   63216 cri.go:89] found id: ""
	I0819 18:16:05.719227   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.719234   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:16:05.719240   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:05.719283   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:05.764548   63216 cri.go:89] found id: ""
	I0819 18:16:05.764577   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.764587   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:16:05.764593   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:05.764644   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:05.800478   63216 cri.go:89] found id: ""
	I0819 18:16:05.800503   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.800521   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:16:05.800527   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:05.800582   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:05.837403   63216 cri.go:89] found id: ""
	I0819 18:16:05.837432   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.837443   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:16:05.837450   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:05.837506   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:05.869330   63216 cri.go:89] found id: ""
	I0819 18:16:05.869357   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.869367   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:16:05.869375   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:05.869463   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:05.900354   63216 cri.go:89] found id: ""
	I0819 18:16:05.900382   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.900393   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:16:05.900401   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:05.900457   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:05.933899   63216 cri.go:89] found id: ""
	I0819 18:16:05.933926   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.933937   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:05.933944   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:16:05.934003   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:16:05.968393   63216 cri.go:89] found id: ""
	I0819 18:16:05.968421   63216 logs.go:276] 0 containers: []
	W0819 18:16:05.968430   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:16:05.968441   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:05.968458   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:05.980957   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:05.980988   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:16:06.045310   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:16:06.045359   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:06.045375   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:06.124351   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:16:06.124389   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:06.168102   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:06.168130   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:08.718499   63216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:08.731535   63216 kubeadm.go:597] duration metric: took 4m4.252819836s to restartPrimaryControlPlane
	W0819 18:16:08.731622   63216 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:08.731651   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
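The lines above show the fallback path for this profile: after roughly four minutes without a kube-apiserver process, minikube stops trying to restart the existing control plane and wipes it with `kubeadm reset` before rebuilding. A small Go sketch of that poll-until-deadline-then-reset control flow (illustrative only; the command strings are copied from the log, the helper names are hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runShell executes a command the same way the log's ssh_runner lines do,
// except locally: via /bin/bash -c.
func runShell(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitThenReset polls for a kube-apiserver process until the deadline and,
// if none ever appears, falls back to `kubeadm reset` (the same command the
// log shows being issued once the wait expired).
func waitThenReset(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if runShell("sudo pgrep -xnf kube-apiserver.*minikube.*") == nil {
			return nil // apiserver process appeared; keep the control plane
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("no apiserver before deadline; resetting cluster")
	return runShell(`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`)
}

func main() {
	if err := waitThenReset(4 * time.Minute); err != nil {
		fmt.Println("reset failed:", err)
	}
}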
	I0819 18:16:06.172578   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.671110   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:08.013019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:09.338729   62749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:16:09.355014   62749 api_server.go:72] duration metric: took 4m18.036977131s to wait for apiserver process to appear ...
	I0819 18:16:09.355046   62749 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:16:09.355086   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:09.355148   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:09.390088   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:09.390107   62749 cri.go:89] found id: ""
	I0819 18:16:09.390115   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:09.390161   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.393972   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:09.394024   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:09.426919   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:09.426943   62749 cri.go:89] found id: ""
	I0819 18:16:09.426953   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:09.427007   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.430685   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:09.430755   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:09.465843   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:09.465867   62749 cri.go:89] found id: ""
	I0819 18:16:09.465876   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:09.465936   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.469990   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:09.470057   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:09.503690   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:09.503716   62749 cri.go:89] found id: ""
	I0819 18:16:09.503727   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:09.503789   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.507731   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:09.507791   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:09.541067   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:09.541098   62749 cri.go:89] found id: ""
	I0819 18:16:09.541108   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:09.541169   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.546503   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:09.546568   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:09.587861   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:09.587888   62749 cri.go:89] found id: ""
	I0819 18:16:09.587898   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:09.587960   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.593765   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:09.593831   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:09.628426   62749 cri.go:89] found id: ""
	I0819 18:16:09.628456   62749 logs.go:276] 0 containers: []
	W0819 18:16:09.628464   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:09.628470   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:09.628529   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:09.666596   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.666622   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.666628   62749 cri.go:89] found id: ""
	I0819 18:16:09.666636   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:09.666688   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.670929   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:09.674840   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:09.674863   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:09.708286   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:09.708313   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:09.739212   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:09.739234   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:10.171487   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:10.171535   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:10.208985   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:10.209025   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:10.222001   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:10.222028   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:10.267193   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:10.267225   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:10.300082   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:10.300110   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:10.333403   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:10.333434   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:10.371961   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:10.371989   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:10.425550   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:10.425586   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:10.500742   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:10.500796   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:10.602484   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:10.602518   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.149769   62749 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8444/healthz ...
	I0819 18:16:13.154238   62749 api_server.go:279] https://192.168.61.243:8444/healthz returned 200:
	ok
	I0819 18:16:13.155139   62749 api_server.go:141] control plane version: v1.31.0
	I0819 18:16:13.155154   62749 api_server.go:131] duration metric: took 3.800101993s to wait for apiserver health ...
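The healthz probe that just succeeded is a plain HTTPS GET against the apiserver followed by a check for a 200 "ok" response. A self-contained Go sketch of the same check (the URL is taken from the log; certificate verification is skipped here only so the example runs without minikube's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// main performs the same kind of check the log records at api_server.go:253:
// GET /healthz on the apiserver and treat HTTP 200 with body "ok" as healthy.
func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.243:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}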
	I0819 18:16:13.155161   62749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:16:13.155180   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:13.155232   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:13.194723   62749 cri.go:89] found id: "d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.194749   62749 cri.go:89] found id: ""
	I0819 18:16:13.194759   62749 logs.go:276] 1 containers: [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784]
	I0819 18:16:13.194811   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.198645   62749 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:13.198703   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:13.236332   62749 cri.go:89] found id: "8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:13.236405   62749 cri.go:89] found id: ""
	I0819 18:16:13.236418   62749 logs.go:276] 1 containers: [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117]
	I0819 18:16:13.236473   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.240682   62749 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:13.240764   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:13.277257   62749 cri.go:89] found id: "85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:13.277283   62749 cri.go:89] found id: ""
	I0819 18:16:13.277290   62749 logs.go:276] 1 containers: [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355]
	I0819 18:16:13.277339   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.281458   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:13.281516   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:13.319419   62749 cri.go:89] found id: "93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.319444   62749 cri.go:89] found id: ""
	I0819 18:16:13.319453   62749 logs.go:276] 1 containers: [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2]
	I0819 18:16:13.319508   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.323377   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:13.323444   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:13.357320   62749 cri.go:89] found id: "eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.357344   62749 cri.go:89] found id: ""
	I0819 18:16:13.357353   62749 logs.go:276] 1 containers: [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0]
	I0819 18:16:13.357417   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.361505   62749 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:13.361582   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:13.396379   62749 cri.go:89] found id: "faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.396396   62749 cri.go:89] found id: ""
	I0819 18:16:13.396403   62749 logs.go:276] 1 containers: [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb]
	I0819 18:16:13.396457   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.400372   62749 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:13.400442   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:13.433520   62749 cri.go:89] found id: ""
	I0819 18:16:13.433551   62749 logs.go:276] 0 containers: []
	W0819 18:16:13.433561   62749 logs.go:278] No container was found matching "kindnet"
	I0819 18:16:13.433569   62749 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0819 18:16:13.433629   62749 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 18:16:13.467382   62749 cri.go:89] found id: "c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.467411   62749 cri.go:89] found id: "cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.467418   62749 cri.go:89] found id: ""
	I0819 18:16:13.467427   62749 logs.go:276] 2 containers: [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a]
	I0819 18:16:13.467486   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.471371   62749 ssh_runner.go:195] Run: which crictl
	I0819 18:16:13.474905   62749 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:13.474924   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:13.547564   62749 logs.go:123] Gathering logs for kube-apiserver [d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784] ...
	I0819 18:16:13.547596   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5fff05f93c77da4d58a34a57f87d62868be18076414860ed0e3020492b42784"
	I0819 18:16:13.593702   62749 logs.go:123] Gathering logs for kube-scheduler [93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2] ...
	I0819 18:16:13.593731   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93344a9847519b1c1f1b2434c49c06dd3c357f6f0cb84f5a5d1fb1a6cf9deaa2"
	I0819 18:16:13.629610   62749 logs.go:123] Gathering logs for kube-proxy [eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0] ...
	I0819 18:16:13.629634   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb30ed4fd51a8ce2d956d8d1502cd882f00ac457cbfa02ee6e2d0cba209fb9d0"
	I0819 18:16:13.669337   62749 logs.go:123] Gathering logs for kube-controller-manager [faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb] ...
	I0819 18:16:13.669372   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 faf8db92753dd6ae5673f19d104ebb0f3d2cd8837d81695fcee53f34ff0402bb"
	I0819 18:16:13.729986   62749 logs.go:123] Gathering logs for storage-provisioner [c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe] ...
	I0819 18:16:13.730012   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c836b0235de709ab18335bb06649c563fb0477a46f04f754435ca1984f905dfe"
	I0819 18:16:13.766424   62749 logs.go:123] Gathering logs for storage-provisioner [cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a] ...
	I0819 18:16:13.766459   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cef2e9a618dd49d9ddd7582a6ec62f6e7c6a7108341b5697f655e0ca4773804a"
	I0819 18:16:13.806677   62749 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:13.806702   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:13.540438   63216 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.808760826s)
	I0819 18:16:13.540508   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:13.555141   63216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:16:13.565159   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:16:13.575671   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:16:13.575689   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:16:13.575743   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:16:13.586181   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:16:13.586388   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:16:13.597239   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:16:13.606788   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:16:13.606857   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:16:13.616964   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.627128   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:16:13.627195   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:16:13.637263   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:16:13.646834   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:16:13.646898   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
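	The 63216 lines above show the stale-config cleanup that runs before kubeadm is re-invoked: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when it is missing or stale. A minimal shell sketch of that check-and-clean loop, built only from the paths and endpoint visible in the log (an illustration, not minikube's actual implementation):

	    #!/usr/bin/env bash
	    # Sketch of the stale-config cleanup seen above: keep a kubeconfig only if it
	    # already points at the expected control-plane endpoint, otherwise remove it.
	    endpoint="https://control-plane.minikube.internal:8443"   # endpoint used in the log
	    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      path="/etc/kubernetes/${conf}"
	      if sudo grep -q "${endpoint}" "${path}" 2>/dev/null; then
	        echo "keeping ${path} (already targets ${endpoint})"
	      else
	        echo "removing ${path} (missing or stale)"
	        sudo rm -f "${path}"
	      fi
	    done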
	I0819 18:16:13.657566   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:16:13.887585   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:16:11.171886   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:13.672521   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:14.199046   62749 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:14.199103   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:14.213508   62749 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:14.213537   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:14.341980   62749 logs.go:123] Gathering logs for etcd [8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117] ...
	I0819 18:16:14.342017   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8832533edf13ef1304fbfca93886ffa0c011820e4170d25383fd28463850f117"
	I0819 18:16:14.389817   62749 logs.go:123] Gathering logs for coredns [85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355] ...
	I0819 18:16:14.389853   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85dd74b0050d1e7ccbfa76d1fd0c2147c4cfc009ecb8f6a5c133941fc2479355"
	I0819 18:16:14.425890   62749 logs.go:123] Gathering logs for container status ...
	I0819 18:16:14.425928   62749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
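	The repeated "Gathering logs for ..." entries above all follow the same two-step pattern: resolve a component's container ID with crictl ps, then tail that container's log with crictl logs, plus a few host-level sources via journalctl and dmesg. A condensed shell equivalent of that loop, assuming crictl is on the node's PATH as the log shows (a sketch, not the logs.go implementation):

	    #!/usr/bin/env bash
	    # Gather per-component container logs the same way the log above does.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager storage-provisioner; do
	      # List all containers (running or exited) whose name matches the component.
	      for id in $(sudo crictl ps -a --quiet --name="${name}"); do
	        echo "===== ${name} (${id}) ====="
	        sudo crictl logs --tail 400 "${id}"
	      done
	    done
	    # Host-level sources gathered in the same pass:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400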
	I0819 18:16:16.991182   62749 system_pods.go:59] 8 kube-system pods found
	I0819 18:16:16.991211   62749 system_pods.go:61] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.991217   62749 system_pods.go:61] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.991221   62749 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.991225   62749 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.991229   62749 system_pods.go:61] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.991232   62749 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.991239   62749 system_pods.go:61] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.991243   62749 system_pods.go:61] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.991250   62749 system_pods.go:74] duration metric: took 3.836084784s to wait for pod list to return data ...
	I0819 18:16:16.991257   62749 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:16:16.993181   62749 default_sa.go:45] found service account: "default"
	I0819 18:16:16.993201   62749 default_sa.go:55] duration metric: took 1.93729ms for default service account to be created ...
	I0819 18:16:16.993208   62749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:16:16.997803   62749 system_pods.go:86] 8 kube-system pods found
	I0819 18:16:16.997825   62749 system_pods.go:89] "coredns-6f6b679f8f-4jvnz" [d81201db-1102-436b-ac29-dd201584de2d] Running
	I0819 18:16:16.997830   62749 system_pods.go:89] "etcd-default-k8s-diff-port-813424" [c9b60af5-479e-40b8-af56-2b1daef22843] Running
	I0819 18:16:16.997835   62749 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813424" [b2f825b3-375c-46fb-9714-b0ed0be6ea51] Running
	I0819 18:16:16.997840   62749 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813424" [c691a1e5-c01d-4bb1-8e37-c542524f9544] Running
	I0819 18:16:16.997844   62749 system_pods.go:89] "kube-proxy-j4x48" [886f5fe5-070e-419c-a9bb-5b95f7496717] Running
	I0819 18:16:16.997848   62749 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813424" [b7bac76d-1783-40f1-adb2-75174ec8486e] Running
	I0819 18:16:16.997854   62749 system_pods.go:89] "metrics-server-6867b74b74-tp742" [aacd7eb1-475f-4d6a-9dad-ac9ef67fc5fb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:16:16.997861   62749 system_pods.go:89] "storage-provisioner" [658a37e1-39b6-4fa9-8f23-71518ebda8dc] Running
	I0819 18:16:16.997868   62749 system_pods.go:126] duration metric: took 4.655661ms to wait for k8s-apps to be running ...
	I0819 18:16:16.997877   62749 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:16:16.997917   62749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:17.013524   62749 system_svc.go:56] duration metric: took 15.634104ms WaitForService to wait for kubelet
	I0819 18:16:17.013559   62749 kubeadm.go:582] duration metric: took 4m25.695525816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:16:17.013585   62749 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:16:17.016278   62749 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:16:17.016301   62749 node_conditions.go:123] node cpu capacity is 2
	I0819 18:16:17.016315   62749 node_conditions.go:105] duration metric: took 2.723578ms to run NodePressure ...
	I0819 18:16:17.016326   62749 start.go:241] waiting for startup goroutines ...
	I0819 18:16:17.016336   62749 start.go:246] waiting for cluster config update ...
	I0819 18:16:17.016351   62749 start.go:255] writing updated cluster config ...
	I0819 18:16:17.016817   62749 ssh_runner.go:195] Run: rm -f paused
	I0819 18:16:17.063056   62749 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:16:17.065819   62749 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-813424" cluster and "default" namespace by default
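	Before printing "Done!", the 62749 process verifies the apiserver's /healthz on the non-default port 8444, waits for the kube-system pods and the default service account, and confirms the kubelet service is active. Those checks can be repeated by hand against the same profile; the commands below are a hedged sketch that reuses the address, port, profile, and context names shown in the log:

	    # Re-run the readiness checks performed above (illustrative; names taken from the log).
	    # /healthz is readable without client certs under the default system:public-info-viewer binding.
	    minikube -p default-k8s-diff-port-813424 ssh -- curl -sk https://192.168.61.243:8444/healthz
	    minikube -p default-k8s-diff-port-813424 ssh -- sudo systemctl is-active kubelet
	    kubectl --context default-k8s-diff-port-813424 -n kube-system get pods
	    kubectl --context default-k8s-diff-port-813424 get serviceaccount default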
	I0819 18:16:14.093007   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:17.164989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:16.172074   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:18.670402   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:20.671024   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:22.671462   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:26.288975   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:25.175354   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:27.671452   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.671496   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:29.357082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:31.671726   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:33.672458   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:35.437060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:36.171920   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.172318   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:38.513064   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:40.670687   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:42.670858   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.671276   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.589000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.660996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:47.171302   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:49.171707   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:51.675414   62137 pod_ready.go:103] pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:53.665939   62137 pod_ready.go:82] duration metric: took 4m0.001066956s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:53.665969   62137 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-jkvcs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:16:53.665994   62137 pod_ready.go:39] duration metric: took 4m12.464901403s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:53.666051   62137 kubeadm.go:597] duration metric: took 4m20.502224967s to restartPrimaryControlPlane
	W0819 18:16:53.666114   62137 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:16:53.666143   62137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:16:53.740978   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:16:56.817027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:02.892936   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:05.965053   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:12.048961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:15.116969   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:19.922253   62137 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.256081543s)
	I0819 18:17:19.922334   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:19.937012   62137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:17:19.946269   62137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:17:19.955344   62137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:17:19.955363   62137 kubeadm.go:157] found existing configuration files:
	
	I0819 18:17:19.955405   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:17:19.963979   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:17:19.964039   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:17:19.972679   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:17:19.980890   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:17:19.980947   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:17:19.989705   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:17:19.998606   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:17:19.998664   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:17:20.007553   62137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:17:20.016136   62137 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:17:20.016185   62137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:17:20.024827   62137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:17:20.073205   62137 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:17:20.073284   62137 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:17:20.186906   62137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:17:20.187034   62137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:17:20.187125   62137 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:17:20.198750   62137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:17:20.200704   62137 out.go:235]   - Generating certificates and keys ...
	I0819 18:17:20.200810   62137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:17:20.200905   62137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:17:20.201015   62137 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:17:20.201099   62137 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:17:20.201202   62137 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:17:20.201279   62137 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:17:20.201370   62137 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:17:20.201468   62137 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:17:20.201578   62137 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:17:20.201686   62137 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:17:20.201743   62137 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:17:20.201823   62137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:17:20.386866   62137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:17:20.483991   62137 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:17:20.575440   62137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:17:20.704349   62137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:17:20.834890   62137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:17:20.835583   62137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:17:20.839290   62137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:17:21.197002   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:20.841232   62137 out.go:235]   - Booting up control plane ...
	I0819 18:17:20.841313   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:17:20.841374   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:17:20.841428   62137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:17:20.858185   62137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:17:20.866369   62137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:17:20.866447   62137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:17:20.997302   62137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:17:20.997435   62137 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:17:21.499506   62137 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041994ms
	I0819 18:17:21.499625   62137 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:17:26.501489   62137 kubeadm.go:310] [api-check] The API server is healthy after 5.002014094s
	I0819 18:17:26.514398   62137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:17:26.534278   62137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:17:26.557460   62137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:17:26.557706   62137 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-233969 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:17:26.569142   62137 kubeadm.go:310] [bootstrap-token] Using token: 2skh80.c6u95wnw3x4gmagv
	I0819 18:17:24.273082   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:26.570814   62137 out.go:235]   - Configuring RBAC rules ...
	I0819 18:17:26.570940   62137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:17:26.583073   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:17:26.592407   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:17:26.595488   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:17:26.599062   62137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:17:26.603754   62137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:17:26.908245   62137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:17:27.340277   62137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:17:27.909394   62137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:17:27.912696   62137 kubeadm.go:310] 
	I0819 18:17:27.912811   62137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:17:27.912834   62137 kubeadm.go:310] 
	I0819 18:17:27.912953   62137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:17:27.912965   62137 kubeadm.go:310] 
	I0819 18:17:27.912996   62137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:17:27.913086   62137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:17:27.913166   62137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:17:27.913178   62137 kubeadm.go:310] 
	I0819 18:17:27.913246   62137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:17:27.913266   62137 kubeadm.go:310] 
	I0819 18:17:27.913338   62137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:17:27.913349   62137 kubeadm.go:310] 
	I0819 18:17:27.913422   62137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:17:27.913527   62137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:17:27.913613   62137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:17:27.913622   62137 kubeadm.go:310] 
	I0819 18:17:27.913727   62137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:17:27.913827   62137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:17:27.913842   62137 kubeadm.go:310] 
	I0819 18:17:27.913934   62137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914073   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:17:27.914112   62137 kubeadm.go:310] 	--control-plane 
	I0819 18:17:27.914121   62137 kubeadm.go:310] 
	I0819 18:17:27.914223   62137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:17:27.914235   62137 kubeadm.go:310] 
	I0819 18:17:27.914353   62137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2skh80.c6u95wnw3x4gmagv \
	I0819 18:17:27.914499   62137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:17:27.916002   62137 kubeadm.go:310] W0819 18:17:20.045306    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916280   62137 kubeadm.go:310] W0819 18:17:20.046268    3048 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:17:27.916390   62137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:17:27.916417   62137 cni.go:84] Creating CNI manager for ""
	I0819 18:17:27.916426   62137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:17:27.918384   62137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:17:27.919646   62137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:17:27.930298   62137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 18:17:27.946332   62137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:17:27.946440   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:27.946462   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-233969 minikube.k8s.io/updated_at=2024_08_19T18_17_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=no-preload-233969 minikube.k8s.io/primary=true
	I0819 18:17:27.972836   62137 ops.go:34] apiserver oom_adj: -16
	I0819 18:17:28.134899   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:28.635909   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.135326   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:29.635339   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.135992   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:30.635626   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.135493   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:31.635632   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.135812   62137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:17:32.208229   62137 kubeadm.go:1113] duration metric: took 4.261865811s to wait for elevateKubeSystemPrivileges
	I0819 18:17:32.208254   62137 kubeadm.go:394] duration metric: took 4m59.094587246s to StartCluster
	I0819 18:17:32.208270   62137 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.208350   62137 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:17:32.210604   62137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:17:32.210888   62137 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:17:32.210967   62137 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:17:32.211052   62137 addons.go:69] Setting storage-provisioner=true in profile "no-preload-233969"
	I0819 18:17:32.211070   62137 addons.go:69] Setting default-storageclass=true in profile "no-preload-233969"
	I0819 18:17:32.211088   62137 addons.go:234] Setting addon storage-provisioner=true in "no-preload-233969"
	I0819 18:17:32.211084   62137 addons.go:69] Setting metrics-server=true in profile "no-preload-233969"
	W0819 18:17:32.211096   62137 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:17:32.211102   62137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-233969"
	I0819 18:17:32.211125   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211126   62137 addons.go:234] Setting addon metrics-server=true in "no-preload-233969"
	W0819 18:17:32.211166   62137 addons.go:243] addon metrics-server should already be in state true
	I0819 18:17:32.211198   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.211124   62137 config.go:182] Loaded profile config "no-preload-233969": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:17:32.211475   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211505   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211589   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211601   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.211619   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.211623   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.212714   62137 out.go:177] * Verifying Kubernetes components...
	I0819 18:17:32.214075   62137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:17:32.227207   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0819 18:17:32.227219   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0819 18:17:32.227615   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.227709   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.228122   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228142   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228216   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.228236   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.228543   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.228610   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.229074   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229112   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.229120   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.229147   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.230316   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0819 18:17:32.230746   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.231408   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.231437   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.231812   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.232018   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.235965   62137 addons.go:234] Setting addon default-storageclass=true in "no-preload-233969"
	W0819 18:17:32.235986   62137 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:17:32.236013   62137 host.go:66] Checking if "no-preload-233969" exists ...
	I0819 18:17:32.236365   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.236392   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.244668   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0819 18:17:32.245056   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.245506   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.245534   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.245816   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0819 18:17:32.245848   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.245989   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.246239   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.246795   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.246811   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.247182   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.247380   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.248517   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.249498   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.250817   62137 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:17:32.251649   62137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:17:30.348988   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:32.252466   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:17:32.252483   62137 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:17:32.252501   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253309   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0819 18:17:32.253687   62137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.253701   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:17:32.253717   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.253828   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.254340   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.254352   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.254706   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.255288   62137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:17:32.255324   62137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:17:32.256274   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256776   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.256796   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.256970   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.257109   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.257229   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.257348   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.257756   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258132   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.258144   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.258384   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.258531   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.258663   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.258788   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.271706   62137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0819 18:17:32.272115   62137 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:17:32.272558   62137 main.go:141] libmachine: Using API Version  1
	I0819 18:17:32.272575   62137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:17:32.272875   62137 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:17:32.273041   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetState
	I0819 18:17:32.274711   62137 main.go:141] libmachine: (no-preload-233969) Calling .DriverName
	I0819 18:17:32.274914   62137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.274924   62137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:17:32.274936   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHHostname
	I0819 18:17:32.277689   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278191   62137 main.go:141] libmachine: (no-preload-233969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:5d:94", ip: ""} in network mk-no-preload-233969: {Iface:virbr2 ExpiryTime:2024-08-19 19:12:07 +0000 UTC Type:0 Mac:52:54:00:99:5d:94 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:no-preload-233969 Clientid:01:52:54:00:99:5d:94}
	I0819 18:17:32.278246   62137 main.go:141] libmachine: (no-preload-233969) DBG | domain no-preload-233969 has defined IP address 192.168.50.8 and MAC address 52:54:00:99:5d:94 in network mk-no-preload-233969
	I0819 18:17:32.278358   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHPort
	I0819 18:17:32.278533   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHKeyPath
	I0819 18:17:32.278701   62137 main.go:141] libmachine: (no-preload-233969) Calling .GetSSHUsername
	I0819 18:17:32.278847   62137 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/no-preload-233969/id_rsa Username:docker}
	I0819 18:17:32.423546   62137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:17:32.445680   62137 node_ready.go:35] waiting up to 6m0s for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.471999   62137 node_ready.go:49] node "no-preload-233969" has status "Ready":"True"
	I0819 18:17:32.472028   62137 node_ready.go:38] duration metric: took 26.307315ms for node "no-preload-233969" to be "Ready" ...
	I0819 18:17:32.472041   62137 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:32.478401   62137 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:32.518483   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:17:32.568928   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:17:32.568953   62137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:17:32.592301   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:17:32.645484   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:17:32.645513   62137 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:17:32.715522   62137 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:32.715552   62137 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:17:32.781693   62137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:17:33.756997   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.238477445s)
	I0819 18:17:33.757035   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757044   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757051   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.164710772s)
	I0819 18:17:33.757088   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757101   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757454   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757450   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757466   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757475   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757483   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757490   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757538   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757564   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757616   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.757640   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.757712   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757729   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.757733   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757852   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.757915   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.757937   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.831562   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.831588   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.831891   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.831907   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928005   62137 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146269845s)
	I0819 18:17:33.928064   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928082   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928391   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928438   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928452   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928465   62137 main.go:141] libmachine: Making call to close driver server
	I0819 18:17:33.928477   62137 main.go:141] libmachine: (no-preload-233969) Calling .Close
	I0819 18:17:33.928809   62137 main.go:141] libmachine: (no-preload-233969) DBG | Closing plugin on server side
	I0819 18:17:33.928820   62137 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:17:33.928835   62137 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:17:33.928851   62137 addons.go:475] Verifying addon metrics-server=true in "no-preload-233969"
	I0819 18:17:33.930974   62137 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0819 18:17:33.932101   62137 addons.go:510] duration metric: took 1.72114773s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
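The metrics-server addon enabled at this point can be spot-checked from the host once the apply completes. A minimal sketch, not part of the test output: the context name is taken from the log above, and the deployment name is an assumption inferred from the metrics-server-6867b74b74-bfkkf pod seen later.

    # spot-check the metrics-server addon enabled above (illustrative sketch)
    kubectl --context no-preload-233969 -n kube-system rollout status deploy/metrics-server --timeout=120s
    kubectl --context no-preload-233969 top nodes   # only returns data once metrics-server is serving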
	I0819 18:17:34.486566   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:33.421045   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:36.984891   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.484617   62137 pod_ready.go:103] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"False"
	I0819 18:17:39.500962   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:42.572983   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:41.990189   62137 pod_ready.go:93] pod "etcd-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.990210   62137 pod_ready.go:82] duration metric: took 9.511780534s for pod "etcd-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.990221   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997282   62137 pod_ready.go:93] pod "kube-apiserver-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:41.997301   62137 pod_ready.go:82] duration metric: took 7.074393ms for pod "kube-apiserver-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:41.997310   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008757   62137 pod_ready.go:93] pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.008775   62137 pod_ready.go:82] duration metric: took 11.458424ms for pod "kube-controller-manager-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.008785   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017802   62137 pod_ready.go:93] pod "kube-proxy-pt5nj" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.017820   62137 pod_ready.go:82] duration metric: took 9.029628ms for pod "kube-proxy-pt5nj" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.017828   62137 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025402   62137 pod_ready.go:93] pod "kube-scheduler-no-preload-233969" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:42.025424   62137 pod_ready.go:82] duration metric: took 7.589229ms for pod "kube-scheduler-no-preload-233969" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:42.025433   62137 pod_ready.go:39] duration metric: took 9.553379252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:42.025451   62137 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:17:42.025508   62137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:17:42.043190   62137 api_server.go:72] duration metric: took 9.832267712s to wait for apiserver process to appear ...
	I0819 18:17:42.043214   62137 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:17:42.043231   62137 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I0819 18:17:42.051124   62137 api_server.go:279] https://192.168.50.8:8443/healthz returned 200:
	ok
	I0819 18:17:42.052367   62137 api_server.go:141] control plane version: v1.31.0
	I0819 18:17:42.052392   62137 api_server.go:131] duration metric: took 9.170652ms to wait for apiserver health ...
	I0819 18:17:42.052404   62137 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:17:42.187227   62137 system_pods.go:59] 9 kube-system pods found
	I0819 18:17:42.187254   62137 system_pods.go:61] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.187259   62137 system_pods.go:61] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.187263   62137 system_pods.go:61] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.187267   62137 system_pods.go:61] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.187270   62137 system_pods.go:61] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.187273   62137 system_pods.go:61] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.187277   62137 system_pods.go:61] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.187282   62137 system_pods.go:61] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.187285   62137 system_pods.go:61] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.187292   62137 system_pods.go:74] duration metric: took 134.882111ms to wait for pod list to return data ...
	I0819 18:17:42.187299   62137 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:17:42.382612   62137 default_sa.go:45] found service account: "default"
	I0819 18:17:42.382643   62137 default_sa.go:55] duration metric: took 195.337173ms for default service account to be created ...
	I0819 18:17:42.382652   62137 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:17:42.585988   62137 system_pods.go:86] 9 kube-system pods found
	I0819 18:17:42.586024   62137 system_pods.go:89] "coredns-6f6b679f8f-kdrzp" [0db6b602-ca09-40f4-9492-93cbbf919aa7] Running
	I0819 18:17:42.586032   62137 system_pods.go:89] "coredns-6f6b679f8f-vb6dx" [0d39fac8-0d53-4380-a903-080414848e24] Running
	I0819 18:17:42.586038   62137 system_pods.go:89] "etcd-no-preload-233969" [4974eb7f-625a-43fb-b0c4-09cf5b4c0829] Running
	I0819 18:17:42.586044   62137 system_pods.go:89] "kube-apiserver-no-preload-233969" [eb582488-97d8-494c-95ba-6c9e15ff433b] Running
	I0819 18:17:42.586049   62137 system_pods.go:89] "kube-controller-manager-no-preload-233969" [cf978fab-7333-45bb-adc8-127b905707a7] Running
	I0819 18:17:42.586056   62137 system_pods.go:89] "kube-proxy-pt5nj" [dfd68c04-8a56-4a98-a1fb-21a194ebb5e3] Running
	I0819 18:17:42.586062   62137 system_pods.go:89] "kube-scheduler-no-preload-233969" [d08ef733-88fa-43da-92ac-aee974a6c2d2] Running
	I0819 18:17:42.586072   62137 system_pods.go:89] "metrics-server-6867b74b74-bfkkf" [00206622-fe4f-4f26-8f69-ac7fb6a39805] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:17:42.586078   62137 system_pods.go:89] "storage-provisioner" [67a50087-cb20-407b-9a87-03d04d230afb] Running
	I0819 18:17:42.586089   62137 system_pods.go:126] duration metric: took 203.431371ms to wait for k8s-apps to be running ...
	I0819 18:17:42.586101   62137 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:17:42.586154   62137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:42.601268   62137 system_svc.go:56] duration metric: took 15.156104ms WaitForService to wait for kubelet
	I0819 18:17:42.601305   62137 kubeadm.go:582] duration metric: took 10.39038433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:17:42.601330   62137 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:17:42.783030   62137 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:17:42.783058   62137 node_conditions.go:123] node cpu capacity is 2
	I0819 18:17:42.783069   62137 node_conditions.go:105] duration metric: took 181.734608ms to run NodePressure ...
	I0819 18:17:42.783080   62137 start.go:241] waiting for startup goroutines ...
	I0819 18:17:42.783087   62137 start.go:246] waiting for cluster config update ...
	I0819 18:17:42.783097   62137 start.go:255] writing updated cluster config ...
	I0819 18:17:42.783349   62137 ssh_runner.go:195] Run: rm -f paused
	I0819 18:17:42.831445   62137 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:17:42.833881   62137 out.go:177] * Done! kubectl is now configured to use "no-preload-233969" cluster and "default" namespace by default
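After the "Done!" line the cluster is usable, and the checks the log just performed (node Ready, control-plane pods Ready, apiserver /healthz) can be repeated by hand. A minimal sketch, assuming the same context name and apiserver endpoint reported above:

    # repeat the readiness checks from the log above (illustrative sketch)
    kubectl --context no-preload-233969 get nodes                 # expect node no-preload-233969 Ready
    kubectl --context no-preload-233969 -n kube-system get pods   # etcd/apiserver/controller-manager/scheduler/proxy Running
    curl -k https://192.168.50.8:8443/healthz                     # same endpoint probed at 18:17:42; expect "ok"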
	I0819 18:17:48.653035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:51.725070   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:17:57.805043   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:00.881114   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:06.956979   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.974002   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:18:09.974108   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:18:09.975602   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:18:09.975650   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:18:09.975736   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:18:09.975861   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:18:09.975993   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:18:09.976086   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:18:09.978023   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:18:09.978100   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:18:09.978157   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:18:09.978230   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:18:09.978281   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:18:09.978358   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:18:09.978408   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:18:09.978466   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:18:09.978529   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:18:09.978645   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:18:09.978758   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:18:09.978816   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:18:09.978890   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:18:09.978973   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:18:09.979046   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:18:09.979138   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:18:09.979191   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:18:09.979339   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:18:09.979438   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:18:09.979503   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:18:09.979595   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:18:10.028995   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:09.981931   63216 out.go:235]   - Booting up control plane ...
	I0819 18:18:09.982014   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:18:09.982087   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:18:09.982142   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:18:09.982213   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:18:09.982378   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:18:09.982432   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:18:09.982491   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982715   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.982914   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.982996   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983204   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983268   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983424   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983485   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:18:09.983646   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:18:09.983656   63216 kubeadm.go:310] 
	I0819 18:18:09.983705   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:18:09.983747   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:18:09.983754   63216 kubeadm.go:310] 
	I0819 18:18:09.983788   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:18:09.983818   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:18:09.983957   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:18:09.983982   63216 kubeadm.go:310] 
	I0819 18:18:09.984089   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:18:09.984119   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:18:09.984175   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:18:09.984186   63216 kubeadm.go:310] 
	I0819 18:18:09.984277   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:18:09.984372   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:18:09.984378   63216 kubeadm.go:310] 
	I0819 18:18:09.984474   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:18:09.984552   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:18:09.984621   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:18:09.984699   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:18:09.984762   63216 kubeadm.go:310] 
	W0819 18:18:09.984832   63216 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0819 18:18:09.984873   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
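When kubeadm times out waiting for the control plane, its own output above points at the kubelet and the container runtime. Collected into one pass, the suggested commands look roughly like this; a sketch to be run on the affected node (e.g. over minikube ssh), with the crictl socket path taken from the log:

    # diagnose a kubeadm wait-control-plane timeout (sketch of the steps suggested above)
    systemctl status kubelet                       # is the kubelet running at all?
    journalctl -xeu kubelet | tail -n 100          # why did it stop or fail to start?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # inspect a failing container found above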
	I0819 18:18:10.439037   63216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:18:10.453739   63216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:18:10.463241   63216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:18:10.463262   63216 kubeadm.go:157] found existing configuration files:
	
	I0819 18:18:10.463313   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:18:10.472407   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:18:10.472467   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:18:10.481297   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:18:10.489478   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:18:10.489542   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:18:10.498042   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.506373   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:18:10.506433   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:18:10.515158   63216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:18:10.523412   63216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:18:10.523483   63216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
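The stale-config check above follows the same pattern for each kubeconfig under /etc/kubernetes: grep for the expected control-plane endpoint and remove the file when the grep fails (here it fails with status 2 because the files do not exist). A plain-bash equivalent, as a sketch; the paths and grep pattern are taken from the log, the loop itself is an assumption:

    # sketch of the stale kubeconfig cleanup performed above
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or pointing at the wrong endpoint: drop it before retrying kubeadm init
      fi
    done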
	I0819 18:18:10.532060   63216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:18:10.746836   63216 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:18:16.109014   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:19.180970   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:25.261041   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:28.333057   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:34.412966   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:37.485036   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:43.565013   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:46.637059   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:52.716967   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:18:55.789060   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:01.869005   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:04.941027   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:11.020989   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:14.093067   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:20.173021   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:23.248974   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:29.324961   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:32.397037   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:38.477031   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:41.549001   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:47.629019   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:50.700996   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:56.781035   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:19:59.853000   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:06.430174   63216 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0819 18:20:06.430256   63216 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0819 18:20:06.431894   63216 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0819 18:20:06.431968   63216 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:20:06.432060   63216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:20:06.432203   63216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:20:06.432334   63216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 18:20:06.432440   63216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:20:06.434250   63216 out.go:235]   - Generating certificates and keys ...
	I0819 18:20:06.434349   63216 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:20:06.434444   63216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:20:06.434563   63216 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:20:06.434623   63216 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:20:06.434717   63216 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:20:06.434805   63216 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:20:06.434894   63216 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:20:06.434974   63216 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:20:06.435052   63216 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:20:06.435135   63216 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:20:06.435204   63216 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:20:06.435288   63216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:20:06.435365   63216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:20:06.435421   63216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:20:06.435474   63216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:20:06.435531   63216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:20:06.435689   63216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:20:06.435781   63216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:20:06.435827   63216 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:20:06.435886   63216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:20:06.437538   63216 out.go:235]   - Booting up control plane ...
	I0819 18:20:06.437678   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:20:06.437771   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:20:06.437852   63216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:20:06.437928   63216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:20:06.438063   63216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 18:20:06.438105   63216 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0819 18:20:06.438164   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438342   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438416   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438568   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438637   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.438821   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.438902   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439167   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439264   63216 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0819 18:20:06.439458   63216 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0819 18:20:06.439472   63216 kubeadm.go:310] 
	I0819 18:20:06.439514   63216 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0819 18:20:06.439547   63216 kubeadm.go:310] 		timed out waiting for the condition
	I0819 18:20:06.439553   63216 kubeadm.go:310] 
	I0819 18:20:06.439583   63216 kubeadm.go:310] 	This error is likely caused by:
	I0819 18:20:06.439626   63216 kubeadm.go:310] 		- The kubelet is not running
	I0819 18:20:06.439732   63216 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0819 18:20:06.439749   63216 kubeadm.go:310] 
	I0819 18:20:06.439873   63216 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0819 18:20:06.439915   63216 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0819 18:20:06.439944   63216 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0819 18:20:06.439952   63216 kubeadm.go:310] 
	I0819 18:20:06.440039   63216 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0819 18:20:06.440106   63216 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0819 18:20:06.440113   63216 kubeadm.go:310] 
	I0819 18:20:06.440252   63216 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0819 18:20:06.440329   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0819 18:20:06.440392   63216 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0819 18:20:06.440458   63216 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0819 18:20:06.440521   63216 kubeadm.go:394] duration metric: took 8m2.012853316s to StartCluster
	I0819 18:20:06.440524   63216 kubeadm.go:310] 
	I0819 18:20:06.440559   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:20:06.440610   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:20:06.481255   63216 cri.go:89] found id: ""
	I0819 18:20:06.481285   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.481297   63216 logs.go:278] No container was found matching "kube-apiserver"
	I0819 18:20:06.481305   63216 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:20:06.481364   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:20:06.516769   63216 cri.go:89] found id: ""
	I0819 18:20:06.516801   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.516811   63216 logs.go:278] No container was found matching "etcd"
	I0819 18:20:06.516818   63216 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:20:06.516933   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:20:06.551964   63216 cri.go:89] found id: ""
	I0819 18:20:06.551998   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.552006   63216 logs.go:278] No container was found matching "coredns"
	I0819 18:20:06.552014   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:20:06.552108   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:20:06.586084   63216 cri.go:89] found id: ""
	I0819 18:20:06.586115   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.586124   63216 logs.go:278] No container was found matching "kube-scheduler"
	I0819 18:20:06.586131   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:20:06.586189   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:20:06.620732   63216 cri.go:89] found id: ""
	I0819 18:20:06.620773   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.620785   63216 logs.go:278] No container was found matching "kube-proxy"
	I0819 18:20:06.620792   63216 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:20:06.620843   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:20:06.659731   63216 cri.go:89] found id: ""
	I0819 18:20:06.659762   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.659772   63216 logs.go:278] No container was found matching "kube-controller-manager"
	I0819 18:20:06.659780   63216 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:20:06.659846   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:20:06.694223   63216 cri.go:89] found id: ""
	I0819 18:20:06.694257   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.694267   63216 logs.go:278] No container was found matching "kindnet"
	I0819 18:20:06.694275   63216 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 18:20:06.694337   63216 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 18:20:06.727474   63216 cri.go:89] found id: ""
	I0819 18:20:06.727508   63216 logs.go:276] 0 containers: []
	W0819 18:20:06.727518   63216 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0819 18:20:06.727528   63216 logs.go:123] Gathering logs for kubelet ...
	I0819 18:20:06.727538   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:20:06.778006   63216 logs.go:123] Gathering logs for dmesg ...
	I0819 18:20:06.778041   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:20:06.792059   63216 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:20:06.792089   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0819 18:20:06.863596   63216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0819 18:20:06.863625   63216 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:20:06.863637   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:20:06.979710   63216 logs.go:123] Gathering logs for container status ...
	I0819 18:20:06.979752   63216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
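Because no control-plane containers were found, the remaining evidence comes from the node itself. The commands run above can also be issued manually to produce the same material for a bug report; a sketch of the same sequence, to be run on the node:

    # gather the same diagnostics minikube collected above (sketch)
    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400                                             # CRI-O logs
    sudo crictl ps -a || sudo docker ps -a                                     # container status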
	W0819 18:20:07.030879   63216 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0819 18:20:07.030930   63216 out.go:270] * 
	W0819 18:20:07.031004   63216 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.031025   63216 out.go:270] * 
	W0819 18:20:07.031896   63216 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 18:20:07.035220   63216 out.go:201] 
	W0819 18:20:07.036384   63216 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0819 18:20:07.036435   63216 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0819 18:20:07.036466   63216 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0819 18:20:07.037783   63216 out.go:201] 
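	For reference, the recovery path suggested by the failure above comes down to inspecting the kubelet on the node and, if the cgroup driver is at fault, restarting the profile with the flag named in the warning. A sketch assembled only from the commands printed above (the <profile> placeholder is not part of the log):
	
		# on the affected node
		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# from the host, per the suggestion logged above
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd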
	I0819 18:20:05.933003   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:09.009065   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:15.085040   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:18.160990   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:24.236968   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:27.308959   66229 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.181:22: connect: no route to host
	I0819 18:20:30.310609   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:20:30.310648   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.310938   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:30.310975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:30.311173   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:30.312703   66229 machine.go:96] duration metric: took 4m37.4225796s to provisionDockerMachine
	I0819 18:20:30.312767   66229 fix.go:56] duration metric: took 4m37.446430724s for fixHost
	I0819 18:20:30.312775   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 4m37.446469265s
	W0819 18:20:30.312789   66229 start.go:714] error starting host: provision: host is not running
	W0819 18:20:30.312878   66229 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0819 18:20:30.312887   66229 start.go:729] Will try again in 5 seconds ...
	I0819 18:20:35.313124   66229 start.go:360] acquireMachinesLock for embed-certs-306581: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:20:35.313223   66229 start.go:364] duration metric: took 60.186µs to acquireMachinesLock for "embed-certs-306581"
	I0819 18:20:35.313247   66229 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:20:35.313256   66229 fix.go:54] fixHost starting: 
	I0819 18:20:35.313555   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:20:35.313581   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:20:35.330972   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0819 18:20:35.331433   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:20:35.331878   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:20:35.331897   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:20:35.332189   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:20:35.332376   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:35.332546   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:20:35.334335   66229 fix.go:112] recreateIfNeeded on embed-certs-306581: state=Stopped err=<nil>
	I0819 18:20:35.334360   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	W0819 18:20:35.334529   66229 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:20:35.336031   66229 out.go:177] * Restarting existing kvm2 VM for "embed-certs-306581" ...
	I0819 18:20:35.337027   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Start
	I0819 18:20:35.337166   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring networks are active...
	I0819 18:20:35.337905   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network default is active
	I0819 18:20:35.338212   66229 main.go:141] libmachine: (embed-certs-306581) Ensuring network mk-embed-certs-306581 is active
	I0819 18:20:35.338534   66229 main.go:141] libmachine: (embed-certs-306581) Getting domain xml...
	I0819 18:20:35.339265   66229 main.go:141] libmachine: (embed-certs-306581) Creating domain...
	I0819 18:20:36.576142   66229 main.go:141] libmachine: (embed-certs-306581) Waiting to get IP...
	I0819 18:20:36.577067   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.577471   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.577553   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.577459   67882 retry.go:31] will retry after 288.282156ms: waiting for machine to come up
	I0819 18:20:36.866897   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:36.867437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:36.867507   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:36.867415   67882 retry.go:31] will retry after 357.773556ms: waiting for machine to come up
	I0819 18:20:37.227139   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.227672   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.227697   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.227620   67882 retry.go:31] will retry after 360.777442ms: waiting for machine to come up
	I0819 18:20:37.590245   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:37.590696   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:37.590725   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:37.590672   67882 retry.go:31] will retry after 502.380794ms: waiting for machine to come up
	I0819 18:20:38.094422   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.094938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.094963   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.094893   67882 retry.go:31] will retry after 716.370935ms: waiting for machine to come up
	I0819 18:20:38.812922   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:38.813416   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:38.813437   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:38.813381   67882 retry.go:31] will retry after 728.320282ms: waiting for machine to come up
	I0819 18:20:39.543316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:39.543705   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:39.543731   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:39.543668   67882 retry.go:31] will retry after 725.532345ms: waiting for machine to come up
	I0819 18:20:40.270826   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:40.271325   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:40.271347   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:40.271280   67882 retry.go:31] will retry after 1.054064107s: waiting for machine to come up
	I0819 18:20:41.326463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:41.326952   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:41.326983   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:41.326896   67882 retry.go:31] will retry after 1.258426337s: waiting for machine to come up
	I0819 18:20:42.587252   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:42.587685   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:42.587715   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:42.587645   67882 retry.go:31] will retry after 1.884128664s: waiting for machine to come up
	I0819 18:20:44.474042   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:44.474569   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:44.474592   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:44.474528   67882 retry.go:31] will retry after 2.484981299s: waiting for machine to come up
	I0819 18:20:46.961480   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:46.961991   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:46.962010   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:46.961956   67882 retry.go:31] will retry after 2.912321409s: waiting for machine to come up
	I0819 18:20:49.877938   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:49.878388   66229 main.go:141] libmachine: (embed-certs-306581) DBG | unable to find current IP address of domain embed-certs-306581 in network mk-embed-certs-306581
	I0819 18:20:49.878414   66229 main.go:141] libmachine: (embed-certs-306581) DBG | I0819 18:20:49.878347   67882 retry.go:31] will retry after 4.020459132s: waiting for machine to come up
	I0819 18:20:53.901782   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902239   66229 main.go:141] libmachine: (embed-certs-306581) Found IP for machine: 192.168.72.181
	I0819 18:20:53.902260   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has current primary IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.902266   66229 main.go:141] libmachine: (embed-certs-306581) Reserving static IP address...
	I0819 18:20:53.902757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.902779   66229 main.go:141] libmachine: (embed-certs-306581) DBG | skip adding static IP to network mk-embed-certs-306581 - found existing host DHCP lease matching {name: "embed-certs-306581", mac: "52:54:00:a4:c5:6a", ip: "192.168.72.181"}
	I0819 18:20:53.902789   66229 main.go:141] libmachine: (embed-certs-306581) Reserved static IP address: 192.168.72.181
	I0819 18:20:53.902800   66229 main.go:141] libmachine: (embed-certs-306581) Waiting for SSH to be available...
	I0819 18:20:53.902808   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Getting to WaitForSSH function...
	I0819 18:20:53.904907   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905284   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:53.905316   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:53.905407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH client type: external
	I0819 18:20:53.905434   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa (-rw-------)
	I0819 18:20:53.905466   66229 main.go:141] libmachine: (embed-certs-306581) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:20:53.905481   66229 main.go:141] libmachine: (embed-certs-306581) DBG | About to run SSH command:
	I0819 18:20:53.905493   66229 main.go:141] libmachine: (embed-certs-306581) DBG | exit 0
	I0819 18:20:54.024614   66229 main.go:141] libmachine: (embed-certs-306581) DBG | SSH cmd err, output: <nil>: 
	I0819 18:20:54.024991   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetConfigRaw
	I0819 18:20:54.025614   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.028496   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.028901   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.028935   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.029207   66229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/config.json ...
	I0819 18:20:54.029412   66229 machine.go:93] provisionDockerMachine start ...
	I0819 18:20:54.029430   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.029630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.032073   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032436   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.032463   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.032647   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.032822   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033002   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.033136   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.033284   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.033483   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.033498   66229 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:20:54.132908   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 18:20:54.132938   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133214   66229 buildroot.go:166] provisioning hostname "embed-certs-306581"
	I0819 18:20:54.133238   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.133426   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.135967   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136324   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.136356   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.136507   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.136713   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.136873   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.137028   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.137215   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.137423   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.137437   66229 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-306581 && echo "embed-certs-306581" | sudo tee /etc/hostname
	I0819 18:20:54.250819   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-306581
	
	I0819 18:20:54.250849   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.253776   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254119   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.254150   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.254351   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.254574   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254718   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.254872   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.255090   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.255269   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.255286   66229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-306581' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-306581/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-306581' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:20:54.361268   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:20:54.361300   66229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:20:54.361328   66229 buildroot.go:174] setting up certificates
	I0819 18:20:54.361342   66229 provision.go:84] configureAuth start
	I0819 18:20:54.361359   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetMachineName
	I0819 18:20:54.361630   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:54.364099   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364511   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.364541   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.364666   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.366912   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367301   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.367329   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.367447   66229 provision.go:143] copyHostCerts
	I0819 18:20:54.367496   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:20:54.367515   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:20:54.367586   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:20:54.367687   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:20:54.367699   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:20:54.367737   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:20:54.367824   66229 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:20:54.367834   66229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:20:54.367860   66229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:20:54.367919   66229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.embed-certs-306581 san=[127.0.0.1 192.168.72.181 embed-certs-306581 localhost minikube]
	I0819 18:20:54.424019   66229 provision.go:177] copyRemoteCerts
	I0819 18:20:54.424075   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:20:54.424096   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.426737   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.426994   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.427016   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.427171   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.427380   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.427523   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.427645   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.506517   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:20:54.530454   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 18:20:54.552740   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 18:20:54.574870   66229 provision.go:87] duration metric: took 213.51055ms to configureAuth
	I0819 18:20:54.574904   66229 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:20:54.575077   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:20:54.575213   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.577946   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578283   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.578312   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.578484   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.578683   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578878   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.578993   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.579122   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.579267   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.579281   66229 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:20:54.825788   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:20:54.825815   66229 machine.go:96] duration metric: took 796.390996ms to provisionDockerMachine
	I0819 18:20:54.825826   66229 start.go:293] postStartSetup for "embed-certs-306581" (driver="kvm2")
	I0819 18:20:54.825836   66229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:20:54.825850   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:54.826187   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:20:54.826214   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.829048   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829433   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.829462   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.829582   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.829819   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.829963   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.830093   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:54.911609   66229 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:20:54.915894   66229 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:20:54.915916   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:20:54.915979   66229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:20:54.916049   66229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:20:54.916134   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:20:54.926185   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:20:54.952362   66229 start.go:296] duration metric: took 126.500839ms for postStartSetup
	I0819 18:20:54.952401   66229 fix.go:56] duration metric: took 19.639145598s for fixHost
	I0819 18:20:54.952420   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:54.955522   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.955881   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:54.955909   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:54.956078   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:54.956270   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956450   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:54.956605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:54.956785   66229 main.go:141] libmachine: Using SSH client type: native
	I0819 18:20:54.956940   66229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.181 22 <nil> <nil>}
	I0819 18:20:54.956950   66229 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:20:55.053204   66229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724091655.030704823
	
	I0819 18:20:55.053229   66229 fix.go:216] guest clock: 1724091655.030704823
	I0819 18:20:55.053237   66229 fix.go:229] Guest: 2024-08-19 18:20:55.030704823 +0000 UTC Remote: 2024-08-19 18:20:54.952405352 +0000 UTC m=+302.228892640 (delta=78.299471ms)
	I0819 18:20:55.053254   66229 fix.go:200] guest clock delta is within tolerance: 78.299471ms
	I0819 18:20:55.053261   66229 start.go:83] releasing machines lock for "embed-certs-306581", held for 19.740028573s
	I0819 18:20:55.053277   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.053530   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:55.056146   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056523   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.056546   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.056677   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057135   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057320   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:20:55.057404   66229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:20:55.057445   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.057497   66229 ssh_runner.go:195] Run: cat /version.json
	I0819 18:20:55.057518   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:20:55.059944   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.059969   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060265   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060296   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060359   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:55.060407   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:55.060416   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060528   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:20:55.060605   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060672   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:20:55.060778   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060838   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:20:55.060899   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.060941   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:20:55.183438   66229 ssh_runner.go:195] Run: systemctl --version
	I0819 18:20:55.189341   66229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:20:55.330628   66229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:20:55.336807   66229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:20:55.336877   66229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:20:55.351865   66229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:20:55.351893   66229 start.go:495] detecting cgroup driver to use...
	I0819 18:20:55.351988   66229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:20:55.368983   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:20:55.382795   66229 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:20:55.382848   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:20:55.396175   66229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:20:55.409333   66229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:20:55.534054   66229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:20:55.685410   66229 docker.go:233] disabling docker service ...
	I0819 18:20:55.685483   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:20:55.699743   66229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:20:55.712425   66229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:20:55.842249   66229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:20:55.964126   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:20:55.978354   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:20:55.995963   66229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:20:55.996032   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.006717   66229 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:20:56.006810   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.017350   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.027098   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.037336   66229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:20:56.047188   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.059128   66229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.076950   66229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:20:56.087819   66229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:20:56.097922   66229 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:20:56.097980   66229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:20:56.114569   66229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:20:56.130215   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:20:56.243812   66229 ssh_runner.go:195] Run: sudo systemctl restart crio
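	Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted (a sketch inferred from the commands shown; the section headers are an assumption, since the log only shows the sed invocations):
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10"
	
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]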
	I0819 18:20:56.376166   66229 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:20:56.376294   66229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:20:56.380916   66229 start.go:563] Will wait 60s for crictl version
	I0819 18:20:56.380973   66229 ssh_runner.go:195] Run: which crictl
	I0819 18:20:56.384492   66229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:20:56.421992   66229 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:20:56.422058   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.448657   66229 ssh_runner.go:195] Run: crio --version
	I0819 18:20:56.477627   66229 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:20:56.479098   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetIP
	I0819 18:20:56.482364   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482757   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:20:56.482800   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:20:56.482997   66229 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0819 18:20:56.486798   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
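	The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the host-side gateway; after it runs, the guest's /etc/hosts contains a line like the following (taken from the command itself):
	
		192.168.72.1	host.minikube.internal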
	I0819 18:20:56.498662   66229 kubeadm.go:883] updating cluster {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:20:56.498820   66229 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:20:56.498890   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:56.534076   66229 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:20:56.534137   66229 ssh_runner.go:195] Run: which lz4
	I0819 18:20:56.537906   66229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:20:56.541691   66229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:20:56.541726   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:20:57.728202   66229 crio.go:462] duration metric: took 1.190335452s to copy over tarball
	I0819 18:20:57.728263   66229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:20:59.870389   66229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.142096936s)
	I0819 18:20:59.870434   66229 crio.go:469] duration metric: took 2.142210052s to extract the tarball
	I0819 18:20:59.870443   66229 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 18:20:59.907013   66229 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:20:59.949224   66229 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:20:59.949244   66229 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:20:59.949257   66229 kubeadm.go:934] updating node { 192.168.72.181 8443 v1.31.0 crio true true} ...
	I0819 18:20:59.949790   66229 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-306581 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:20:59.949851   66229 ssh_runner.go:195] Run: crio config
	I0819 18:20:59.993491   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:20:59.993521   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:20:59.993535   66229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:20:59.993561   66229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.181 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-306581 NodeName:embed-certs-306581 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:20:59.993735   66229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-306581"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:20:59.993814   66229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:21:00.003488   66229 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:21:00.003563   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:21:00.012546   66229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0819 18:21:00.028546   66229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:21:00.044037   66229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0819 18:21:00.059422   66229 ssh_runner.go:195] Run: grep 192.168.72.181	control-plane.minikube.internal$ /etc/hosts
	I0819 18:21:00.062992   66229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:21:00.075172   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:21:00.213050   66229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:21:00.230086   66229 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581 for IP: 192.168.72.181
	I0819 18:21:00.230114   66229 certs.go:194] generating shared ca certs ...
	I0819 18:21:00.230135   66229 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:21:00.230303   66229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:21:00.230371   66229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:21:00.230386   66229 certs.go:256] generating profile certs ...
	I0819 18:21:00.230506   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/client.key
	I0819 18:21:00.230593   66229 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key.cf6a9e5e
	I0819 18:21:00.230652   66229 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key
	I0819 18:21:00.230819   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:21:00.230863   66229 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:21:00.230877   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:21:00.230912   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:21:00.230951   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:21:00.230985   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:21:00.231053   66229 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:21:00.231968   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:21:00.265793   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:21:00.292911   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:21:00.333617   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:21:00.361258   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0819 18:21:00.394711   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 18:21:00.417880   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:21:00.440771   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/embed-certs-306581/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 18:21:00.464416   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:21:00.489641   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:21:00.512135   66229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:21:00.535608   66229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:21:00.552131   66229 ssh_runner.go:195] Run: openssl version
	I0819 18:21:00.557821   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:21:00.568710   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573178   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.573239   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:21:00.578820   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:21:00.589649   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:21:00.600652   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.604986   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.605049   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:21:00.610552   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:21:00.620514   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:21:00.630217   66229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634541   66229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.634599   66229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:21:00.639839   66229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
	I0819 18:21:00.649821   66229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:21:00.654288   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:21:00.660071   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:21:00.665354   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:21:00.670791   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:21:00.676451   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:21:00.682099   66229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:21:00.687792   66229 kubeadm.go:392] StartCluster: {Name:embed-certs-306581 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-306581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:21:00.687869   66229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:21:00.687914   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.730692   66229 cri.go:89] found id: ""
	I0819 18:21:00.730762   66229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:21:00.740607   66229 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 18:21:00.740627   66229 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 18:21:00.740687   66229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 18:21:00.750127   66229 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:21:00.751927   66229 kubeconfig.go:125] found "embed-certs-306581" server: "https://192.168.72.181:8443"
	I0819 18:21:00.754865   66229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 18:21:00.764102   66229 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.181
	I0819 18:21:00.764130   66229 kubeadm.go:1160] stopping kube-system containers ...
	I0819 18:21:00.764142   66229 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0819 18:21:00.764210   66229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:21:00.797866   66229 cri.go:89] found id: ""
	I0819 18:21:00.797939   66229 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 18:21:00.815065   66229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:21:00.824279   66229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:21:00.824297   66229 kubeadm.go:157] found existing configuration files:
	
	I0819 18:21:00.824336   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:21:00.832688   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:21:00.832766   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:21:00.841795   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:21:00.852300   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:21:00.852358   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:21:00.862973   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.873195   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:21:00.873243   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:21:00.882559   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:21:00.892687   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:21:00.892774   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:21:00.903746   66229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:21:00.913161   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.017511   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:01.829503   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.047620   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.105126   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:02.157817   66229 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:21:02.157927   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:02.658716   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.158468   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:03.658865   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.157979   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:21:04.175682   66229 api_server.go:72] duration metric: took 2.017872037s to wait for apiserver process to appear ...
	I0819 18:21:04.175711   66229 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:21:04.175731   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.251226   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.251253   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.251265   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.290762   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0819 18:21:07.290788   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0819 18:21:07.676347   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:07.695167   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:07.695220   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.176382   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.183772   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:21:08.183816   66229 api_server.go:103] status: https://192.168.72.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:21:08.676435   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:21:08.680898   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0819 18:21:08.686996   66229 api_server.go:141] control plane version: v1.31.0
	I0819 18:21:08.687023   66229 api_server.go:131] duration metric: took 4.511304673s to wait for apiserver health ...
	I0819 18:21:08.687031   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:21:08.687037   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:21:08.688988   66229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:21:08.690213   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:21:08.701051   66229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 18:21:08.719754   66229 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:21:08.732139   66229 system_pods.go:59] 8 kube-system pods found
	I0819 18:21:08.732172   66229 system_pods.go:61] "coredns-6f6b679f8f-222n6" [1d55fb75-011d-4517-8601-b55ff22d0fe1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:21:08.732179   66229 system_pods.go:61] "etcd-embed-certs-306581" [0b299b0b-00ec-45d6-9e5f-6f8677734138] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0819 18:21:08.732187   66229 system_pods.go:61] "kube-apiserver-embed-certs-306581" [c0342f0d-3e9b-4118-abcb-e6585ec8205c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 18:21:08.732192   66229 system_pods.go:61] "kube-controller-manager-embed-certs-306581" [3e8441b3-f3cc-4e0b-9e9b-2dc1fd41ca1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 18:21:08.732196   66229 system_pods.go:61] "kube-proxy-4vt6x" [559e4638-9505-4d7f-b84e-77b813c84ab4] Running
	I0819 18:21:08.732204   66229 system_pods.go:61] "kube-scheduler-embed-certs-306581" [39ec99a8-3e38-40f6-af5e-02a437573bd2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0819 18:21:08.732210   66229 system_pods.go:61] "metrics-server-6867b74b74-dmpfh" [0edd2d8d-aa29-4817-babb-09e185fc0578] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:21:08.732213   66229 system_pods.go:61] "storage-provisioner" [f267a05a-418f-49a9-b09d-a6330ffa4abf] Running
	I0819 18:21:08.732219   66229 system_pods.go:74] duration metric: took 12.445292ms to wait for pod list to return data ...
	I0819 18:21:08.732226   66229 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:21:08.735979   66229 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:21:08.736004   66229 node_conditions.go:123] node cpu capacity is 2
	I0819 18:21:08.736015   66229 node_conditions.go:105] duration metric: took 3.784963ms to run NodePressure ...
	I0819 18:21:08.736029   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 18:21:08.995746   66229 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001567   66229 kubeadm.go:739] kubelet initialised
	I0819 18:21:09.001592   66229 kubeadm.go:740] duration metric: took 5.816928ms waiting for restarted kubelet to initialise ...
	I0819 18:21:09.001603   66229 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:21:09.006253   66229 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:11.015091   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:13.512551   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:15.512696   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:16.513342   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:16.513387   66229 pod_ready.go:82] duration metric: took 7.507092015s for pod "coredns-6f6b679f8f-222n6" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:16.513404   66229 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519842   66229 pod_ready.go:93] pod "etcd-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.519864   66229 pod_ready.go:82] duration metric: took 1.006452738s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.519873   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524383   66229 pod_ready.go:93] pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:17.524401   66229 pod_ready.go:82] duration metric: took 4.522465ms for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:17.524411   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:19.536012   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:22.030530   66229 pod_ready.go:103] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:23.530792   66229 pod_ready.go:93] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.530818   66229 pod_ready.go:82] duration metric: took 6.006401322s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.530828   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535011   66229 pod_ready.go:93] pod "kube-proxy-4vt6x" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.535030   66229 pod_ready.go:82] duration metric: took 4.196825ms for pod "kube-proxy-4vt6x" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.535038   66229 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538712   66229 pod_ready.go:93] pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:21:23.538731   66229 pod_ready.go:82] duration metric: took 3.686091ms for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:23.538743   66229 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" ...
	I0819 18:21:25.545068   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:28.044531   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:30.044724   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:32.545647   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:35.044620   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:37.044937   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:39.045319   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:41.545155   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:43.545946   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:46.045829   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:48.544436   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:50.546582   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:53.045122   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:55.544595   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:21:57.544701   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:00.044887   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:02.044950   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:04.544241   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:06.546130   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:09.044418   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:11.045634   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:13.545020   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:16.045408   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:18.544890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:21.044294   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:23.045251   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:25.545598   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:27.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:30.044377   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:32.045041   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:34.045316   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:36.045466   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:38.543870   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:40.544216   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:42.545271   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:45.044619   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:47.045364   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:49.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:51.045992   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:53.544682   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:56.045091   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:22:58.045324   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:00.046083   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:02.545541   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:05.045078   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:07.544235   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:09.545586   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:12.045449   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:14.545054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:16.545253   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:19.044239   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:21.045012   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:23.045831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:25.545703   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:28.045069   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:30.045417   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:32.545986   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:35.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:37.545427   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:39.545715   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:42.046173   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:44.545426   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:46.545560   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:48.546489   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:51.044803   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:53.044925   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:55.544871   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:23:57.545044   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:00.044157   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:02.045599   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:04.546054   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:07.044956   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:09.044993   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:11.045233   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:13.046097   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:15.046223   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:17.544258   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:19.545890   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:22.044892   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:24.045926   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:26.545100   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:29.044231   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:31.044942   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:33.545660   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:36.045482   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:38.545467   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:40.545731   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:43.045524   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:45.545299   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:48.044040   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:50.044556   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:52.046009   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:54.545370   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:57.044344   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:24:59.544590   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:02.045528   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:04.546831   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:07.045865   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:09.544718   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:12.044142   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:14.045777   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:16.048107   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:18.545087   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:21.044910   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:23.045553   66229 pod_ready.go:103] pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace has status "Ready":"False"
	I0819 18:25:23.539885   66229 pod_ready.go:82] duration metric: took 4m0.001128118s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" ...
	E0819 18:25:23.539910   66229 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dmpfh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0819 18:25:23.539927   66229 pod_ready.go:39] duration metric: took 4m14.538313663s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:25:23.539953   66229 kubeadm.go:597] duration metric: took 4m22.799312728s to restartPrimaryControlPlane
	W0819 18:25:23.540007   66229 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 18:25:23.540040   66229 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0819 18:25:49.757089   66229 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.217024974s)
	I0819 18:25:49.757162   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:25:49.771550   66229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:25:49.780916   66229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:25:49.789732   66229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:25:49.789751   66229 kubeadm.go:157] found existing configuration files:
	
	I0819 18:25:49.789796   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:25:49.798373   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:25:49.798436   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:25:49.807148   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:25:49.815466   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:25:49.815528   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:25:49.824320   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:25:49.832472   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:25:49.832523   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:25:49.841050   66229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:25:49.849186   66229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:25:49.849243   66229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
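	Note: the cleanup just above follows one pattern for each kubeconfig under /etc/kubernetes: grep for the expected control-plane endpoint and remove the file when the endpoint is not found (here the files simply do not exist yet). A minimal shell sketch of that same pattern, using the endpoint and file names from the log:
	  endpoint="https://control-plane.minikube.internal:8443"
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # drop the kubeconfig when it does not reference the expected endpoint (or is missing)
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	  done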
	I0819 18:25:49.857711   66229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:25:49.904029   66229 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:25:49.904211   66229 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:25:50.021095   66229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:25:50.021242   66229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:25:50.021399   66229 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:25:50.031925   66229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:25:50.033989   66229 out.go:235]   - Generating certificates and keys ...
	I0819 18:25:50.034080   66229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:25:50.034163   66229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:25:50.034236   66229 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 18:25:50.034287   66229 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 18:25:50.034345   66229 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 18:25:50.034392   66229 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 18:25:50.034460   66229 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 18:25:50.034568   66229 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 18:25:50.034679   66229 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 18:25:50.034796   66229 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 18:25:50.034869   66229 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 18:25:50.034950   66229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:25:50.135488   66229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:25:50.189286   66229 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:25:50.602494   66229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:25:50.752478   66229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:25:51.009355   66229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:25:51.009947   66229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:25:51.012443   66229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:25:51.014364   66229 out.go:235]   - Booting up control plane ...
	I0819 18:25:51.014506   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:25:51.014618   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:25:51.014884   66229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:25:51.033153   66229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:25:51.040146   66229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:25:51.040228   66229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:25:51.167821   66229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:25:51.167952   66229 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:25:52.171536   66229 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003657825s
	I0819 18:25:52.171661   66229 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:25:56.673902   66229 kubeadm.go:310] [api-check] The API server is healthy after 4.502200468s
	I0819 18:25:56.700202   66229 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:25:56.718381   66229 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:25:56.745000   66229 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:25:56.745278   66229 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-306581 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:25:56.759094   66229 kubeadm.go:310] [bootstrap-token] Using token: abvjrz.7whl2a0axm001wrp
	I0819 18:25:56.760573   66229 out.go:235]   - Configuring RBAC rules ...
	I0819 18:25:56.760724   66229 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:25:56.766575   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:25:56.780740   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:25:56.784467   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:25:56.788245   66229 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:25:56.792110   66229 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:25:57.088316   66229 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:25:57.528128   66229 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:25:58.088280   66229 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:25:58.088324   66229 kubeadm.go:310] 
	I0819 18:25:58.088398   66229 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:25:58.088425   66229 kubeadm.go:310] 
	I0819 18:25:58.088559   66229 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:25:58.088585   66229 kubeadm.go:310] 
	I0819 18:25:58.088633   66229 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:25:58.088726   66229 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:25:58.088883   66229 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:25:58.088904   66229 kubeadm.go:310] 
	I0819 18:25:58.088983   66229 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:25:58.088996   66229 kubeadm.go:310] 
	I0819 18:25:58.089083   66229 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:25:58.089109   66229 kubeadm.go:310] 
	I0819 18:25:58.089185   66229 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:25:58.089294   66229 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:25:58.089419   66229 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:25:58.089441   66229 kubeadm.go:310] 
	I0819 18:25:58.089557   66229 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:25:58.089669   66229 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:25:58.089681   66229 kubeadm.go:310] 
	I0819 18:25:58.089798   66229 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token abvjrz.7whl2a0axm001wrp \
	I0819 18:25:58.089961   66229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:25:58.089995   66229 kubeadm.go:310] 	--control-plane 
	I0819 18:25:58.090005   66229 kubeadm.go:310] 
	I0819 18:25:58.090134   66229 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:25:58.090146   66229 kubeadm.go:310] 
	I0819 18:25:58.090270   66229 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token abvjrz.7whl2a0axm001wrp \
	I0819 18:25:58.090418   66229 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:25:58.091186   66229 kubeadm.go:310] W0819 18:25:49.877896    2533 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:25:58.091610   66229 kubeadm.go:310] W0819 18:25:49.879026    2533 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:25:58.091792   66229 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
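	Note: the three kubeadm warnings above are non-fatal for this run. The remediation commands are the ones the warnings themselves suggest: migrate the deprecated kubeadm.k8s.io/v1beta3 spec and enable the kubelet unit so it starts on boot.
	  kubeadm config migrate --old-config old.yaml --new-config new.yaml
	  sudo systemctl enable kubelet.service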
	I0819 18:25:58.091814   66229 cni.go:84] Creating CNI manager for ""
	I0819 18:25:58.091824   66229 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 18:25:58.093554   66229 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:25:58.094739   66229 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:25:58.105125   66229 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
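	Note: the 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration announced two lines earlier. The log does not show its contents; a representative bridge + host-local conflist (field values, including the subnet, are an assumption and not necessarily byte-for-byte what minikube writes) looks like:
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }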
	I0819 18:25:58.123435   66229 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:25:58.123526   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:58.123532   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-306581 minikube.k8s.io/updated_at=2024_08_19T18_25_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=embed-certs-306581 minikube.k8s.io/primary=true
	I0819 18:25:58.148101   66229 ops.go:34] apiserver oom_adj: -16
	I0819 18:25:58.298505   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:58.799549   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:59.299523   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:25:59.798660   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:00.299282   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:00.799040   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:01.298647   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:01.798822   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.299035   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.798965   66229 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:26:02.914076   66229 kubeadm.go:1113] duration metric: took 4.790608101s to wait for elevateKubeSystemPrivileges
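	Note: the elevateKubeSystemPrivileges step amounts to the minikube-rbac clusterrolebinding created at 18:25:58 plus polling "get sa default" until the default service account exists. A manual spot-check of the result (standard kubectl, not part of the test flow) would be:
	  kubectl get clusterrolebinding minikube-rbac
	  kubectl get serviceaccount default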
	I0819 18:26:02.914111   66229 kubeadm.go:394] duration metric: took 5m2.226323065s to StartCluster
	I0819 18:26:02.914132   66229 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:26:02.914214   66229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:26:02.915798   66229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:26:02.916048   66229 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.181 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:26:02.916134   66229 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:26:02.916258   66229 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:26:02.916269   66229 addons.go:69] Setting default-storageclass=true in profile "embed-certs-306581"
	I0819 18:26:02.916257   66229 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-306581"
	I0819 18:26:02.916310   66229 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-306581"
	I0819 18:26:02.916342   66229 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-306581"
	I0819 18:26:02.916344   66229 addons.go:69] Setting metrics-server=true in profile "embed-certs-306581"
	W0819 18:26:02.916356   66229 addons.go:243] addon storage-provisioner should already be in state true
	I0819 18:26:02.916376   66229 addons.go:234] Setting addon metrics-server=true in "embed-certs-306581"
	I0819 18:26:02.916382   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	W0819 18:26:02.916389   66229 addons.go:243] addon metrics-server should already be in state true
	I0819 18:26:02.916427   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	I0819 18:26:02.916763   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916775   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916792   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.916805   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.916827   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.916852   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.918733   66229 out.go:177] * Verifying Kubernetes components...
	I0819 18:26:02.920207   66229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:26:02.936535   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0819 18:26:02.936877   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0819 18:26:02.937025   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I0819 18:26:02.937128   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937375   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937485   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.937675   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937698   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.937939   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937951   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.937960   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.937965   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.938038   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938285   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938328   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.938442   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.938611   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.938640   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.938821   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.938859   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.942730   66229 addons.go:234] Setting addon default-storageclass=true in "embed-certs-306581"
	W0819 18:26:02.942783   66229 addons.go:243] addon default-storageclass should already be in state true
	I0819 18:26:02.942825   66229 host.go:66] Checking if "embed-certs-306581" exists ...
	I0819 18:26:02.945808   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.945841   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.959554   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44027
	I0819 18:26:02.959555   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0819 18:26:02.959950   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.960062   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.960479   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.960499   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.960634   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.960650   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.960793   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
	I0819 18:26:02.960976   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.961044   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.961090   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.961157   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.961205   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.961550   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.961571   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.961889   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.962444   66229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:26:02.962471   66229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:26:02.963100   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.963295   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.965320   66229 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:26:02.965389   66229 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 18:26:02.966795   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 18:26:02.966816   66229 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 18:26:02.966835   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.966935   66229 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:26:02.966956   66229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:26:02.966975   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.970428   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.970527   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.970751   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.970771   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.971025   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.971047   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.971053   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.971198   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.971210   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.971364   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.971407   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.971526   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:02.971577   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.971704   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:02.978868   66229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0819 18:26:02.979249   66229 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:26:02.979716   66229 main.go:141] libmachine: Using API Version  1
	I0819 18:26:02.979734   66229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:26:02.980120   66229 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:26:02.980329   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetState
	I0819 18:26:02.982092   66229 main.go:141] libmachine: (embed-certs-306581) Calling .DriverName
	I0819 18:26:02.982322   66229 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:26:02.982337   66229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:26:02.982356   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHHostname
	I0819 18:26:02.984740   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.985154   66229 main.go:141] libmachine: (embed-certs-306581) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:c5:6a", ip: ""} in network mk-embed-certs-306581: {Iface:virbr4 ExpiryTime:2024-08-19 19:20:45 +0000 UTC Type:0 Mac:52:54:00:a4:c5:6a Iaid: IPaddr:192.168.72.181 Prefix:24 Hostname:embed-certs-306581 Clientid:01:52:54:00:a4:c5:6a}
	I0819 18:26:02.985175   66229 main.go:141] libmachine: (embed-certs-306581) DBG | domain embed-certs-306581 has defined IP address 192.168.72.181 and MAC address 52:54:00:a4:c5:6a in network mk-embed-certs-306581
	I0819 18:26:02.985411   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHPort
	I0819 18:26:02.985583   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHKeyPath
	I0819 18:26:02.985734   66229 main.go:141] libmachine: (embed-certs-306581) Calling .GetSSHUsername
	I0819 18:26:02.985861   66229 sshutil.go:53] new ssh client: &{IP:192.168.72.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/embed-certs-306581/id_rsa Username:docker}
	I0819 18:26:03.159722   66229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:26:03.200632   66229 node_ready.go:35] waiting up to 6m0s for node "embed-certs-306581" to be "Ready" ...
	I0819 18:26:03.208989   66229 node_ready.go:49] node "embed-certs-306581" has status "Ready":"True"
	I0819 18:26:03.209020   66229 node_ready.go:38] duration metric: took 8.358821ms for node "embed-certs-306581" to be "Ready" ...
	I0819 18:26:03.209031   66229 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:26:03.215374   66229 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:03.293861   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:26:03.295078   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:26:03.362999   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 18:26:03.363021   66229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 18:26:03.455443   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 18:26:03.455471   66229 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 18:26:03.525137   66229 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:26:03.525167   66229 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 18:26:03.594219   66229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 18:26:03.707027   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.707054   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.707419   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.707510   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.707526   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:03.707540   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.707551   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.707815   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.707863   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:03.707866   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.731452   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:03.731476   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:03.731752   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:03.731766   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:03.731774   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.521921   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.521943   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522255   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:04.522325   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.522338   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.522347   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.522369   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522422   66229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227312769s)
	I0819 18:26:04.522461   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.522472   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.522548   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.522564   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.522574   66229 addons.go:475] Verifying addon metrics-server=true in "embed-certs-306581"
	I0819 18:26:04.523854   66229 main.go:141] libmachine: (embed-certs-306581) DBG | Closing plugin on server side
	I0819 18:26:04.523859   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.523882   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.523899   66229 main.go:141] libmachine: Making call to close driver server
	I0819 18:26:04.523911   66229 main.go:141] libmachine: (embed-certs-306581) Calling .Close
	I0819 18:26:04.524115   66229 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:26:04.524134   66229 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:26:04.525754   66229 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0819 18:26:04.527292   66229 addons.go:510] duration metric: took 1.611171518s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
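	Note: with the metrics-server addon applied, a manual way to confirm it is actually serving (beyond the Pending pod visible in the pod lists below) is to check its APIService registration and try a metrics query; these are standard kubectl commands, assuming the usual k8s-app=metrics-server label, and are not part of the test flow:
	  kubectl -n kube-system get pods -l k8s-app=metrics-server
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  kubectl top nodes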
	I0819 18:26:05.222505   66229 pod_ready.go:103] pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace has status "Ready":"False"
	I0819 18:26:06.222480   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.222511   66229 pod_ready.go:82] duration metric: took 3.00710581s for pod "coredns-6f6b679f8f-274qq" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.222523   66229 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.229629   66229 pod_ready.go:93] pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.229653   66229 pod_ready.go:82] duration metric: took 7.122956ms for pod "coredns-6f6b679f8f-j764j" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.229663   66229 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.234474   66229 pod_ready.go:93] pod "etcd-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.234497   66229 pod_ready.go:82] duration metric: took 4.828007ms for pod "etcd-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.234510   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.239097   66229 pod_ready.go:93] pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.239114   66229 pod_ready.go:82] duration metric: took 4.596493ms for pod "kube-apiserver-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.239123   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.745125   66229 pod_ready.go:93] pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:06.745148   66229 pod_ready.go:82] duration metric: took 506.019468ms for pod "kube-controller-manager-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:06.745160   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-df5kf" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.019557   66229 pod_ready.go:93] pod "kube-proxy-df5kf" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:07.019594   66229 pod_ready.go:82] duration metric: took 274.427237ms for pod "kube-proxy-df5kf" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.019608   66229 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.418650   66229 pod_ready.go:93] pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace has status "Ready":"True"
	I0819 18:26:07.418675   66229 pod_ready.go:82] duration metric: took 399.060317ms for pod "kube-scheduler-embed-certs-306581" in "kube-system" namespace to be "Ready" ...
	I0819 18:26:07.418683   66229 pod_ready.go:39] duration metric: took 4.209640554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:26:07.418696   66229 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:26:07.418742   66229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:26:07.434205   66229 api_server.go:72] duration metric: took 4.518122629s to wait for apiserver process to appear ...
	I0819 18:26:07.434229   66229 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:26:07.434245   66229 api_server.go:253] Checking apiserver healthz at https://192.168.72.181:8443/healthz ...
	I0819 18:26:07.438540   66229 api_server.go:279] https://192.168.72.181:8443/healthz returned 200:
	ok
	I0819 18:26:07.439633   66229 api_server.go:141] control plane version: v1.31.0
	I0819 18:26:07.439654   66229 api_server.go:131] duration metric: took 5.418424ms to wait for apiserver health ...
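	Note: the health check above is just an HTTPS GET against /healthz followed by a version read. A minimal manual equivalent against the endpoint shown in the log (TLS verification skipped with -k for brevity):
	  curl -sk https://192.168.72.181:8443/healthz
	  kubectl version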
	I0819 18:26:07.439664   66229 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:26:07.622538   66229 system_pods.go:59] 9 kube-system pods found
	I0819 18:26:07.622567   66229 system_pods.go:61] "coredns-6f6b679f8f-274qq" [af408da7-683b-4730-b836-a5ae446e84d4] Running
	I0819 18:26:07.622575   66229 system_pods.go:61] "coredns-6f6b679f8f-j764j" [726e772d-dd20-4427-b8b2-40422b5be1ef] Running
	I0819 18:26:07.622580   66229 system_pods.go:61] "etcd-embed-certs-306581" [291235bc-9e42-4982-93c4-d77a0116a9ed] Running
	I0819 18:26:07.622583   66229 system_pods.go:61] "kube-apiserver-embed-certs-306581" [2068ba5f-ea2d-4b99-87e4-2c9d16861cd4] Running
	I0819 18:26:07.622587   66229 system_pods.go:61] "kube-controller-manager-embed-certs-306581" [057adac9-1819-4c28-8bdd-4b95cf4dd33f] Running
	I0819 18:26:07.622590   66229 system_pods.go:61] "kube-proxy-df5kf" [0f004f8f-d49f-468e-acac-a7d691c9cdba] Running
	I0819 18:26:07.622594   66229 system_pods.go:61] "kube-scheduler-embed-certs-306581" [58a0610a-0718-4151-8e0b-bf9dd0e7864a] Running
	I0819 18:26:07.622600   66229 system_pods.go:61] "metrics-server-6867b74b74-j8qbw" [6c7ec046-01e2-4903-9937-c79aabc81bb2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:26:07.622604   66229 system_pods.go:61] "storage-provisioner" [26d63f30-45fd-48f4-973e-6a72cf931b9d] Running
	I0819 18:26:07.622611   66229 system_pods.go:74] duration metric: took 182.941942ms to wait for pod list to return data ...
	I0819 18:26:07.622619   66229 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:26:07.820899   66229 default_sa.go:45] found service account: "default"
	I0819 18:26:07.820924   66229 default_sa.go:55] duration metric: took 198.300082ms for default service account to be created ...
	I0819 18:26:07.820934   66229 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:26:08.021777   66229 system_pods.go:86] 9 kube-system pods found
	I0819 18:26:08.021803   66229 system_pods.go:89] "coredns-6f6b679f8f-274qq" [af408da7-683b-4730-b836-a5ae446e84d4] Running
	I0819 18:26:08.021809   66229 system_pods.go:89] "coredns-6f6b679f8f-j764j" [726e772d-dd20-4427-b8b2-40422b5be1ef] Running
	I0819 18:26:08.021813   66229 system_pods.go:89] "etcd-embed-certs-306581" [291235bc-9e42-4982-93c4-d77a0116a9ed] Running
	I0819 18:26:08.021817   66229 system_pods.go:89] "kube-apiserver-embed-certs-306581" [2068ba5f-ea2d-4b99-87e4-2c9d16861cd4] Running
	I0819 18:26:08.021820   66229 system_pods.go:89] "kube-controller-manager-embed-certs-306581" [057adac9-1819-4c28-8bdd-4b95cf4dd33f] Running
	I0819 18:26:08.021825   66229 system_pods.go:89] "kube-proxy-df5kf" [0f004f8f-d49f-468e-acac-a7d691c9cdba] Running
	I0819 18:26:08.021829   66229 system_pods.go:89] "kube-scheduler-embed-certs-306581" [58a0610a-0718-4151-8e0b-bf9dd0e7864a] Running
	I0819 18:26:08.021836   66229 system_pods.go:89] "metrics-server-6867b74b74-j8qbw" [6c7ec046-01e2-4903-9937-c79aabc81bb2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 18:26:08.021840   66229 system_pods.go:89] "storage-provisioner" [26d63f30-45fd-48f4-973e-6a72cf931b9d] Running
	I0819 18:26:08.021847   66229 system_pods.go:126] duration metric: took 200.908452ms to wait for k8s-apps to be running ...
	I0819 18:26:08.021853   66229 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:26:08.021896   66229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:26:08.035873   66229 system_svc.go:56] duration metric: took 14.008336ms WaitForService to wait for kubelet
	I0819 18:26:08.035902   66229 kubeadm.go:582] duration metric: took 5.119824696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:26:08.035928   66229 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:26:08.219981   66229 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:26:08.220005   66229 node_conditions.go:123] node cpu capacity is 2
	I0819 18:26:08.220016   66229 node_conditions.go:105] duration metric: took 184.083094ms to run NodePressure ...
	I0819 18:26:08.220025   66229 start.go:241] waiting for startup goroutines ...
	I0819 18:26:08.220032   66229 start.go:246] waiting for cluster config update ...
	I0819 18:26:08.220041   66229 start.go:255] writing updated cluster config ...
	I0819 18:26:08.220295   66229 ssh_runner.go:195] Run: rm -f paused
	I0819 18:26:08.267438   66229 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:26:08.269435   66229 out.go:177] * Done! kubectl is now configured to use "embed-certs-306581" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.638021800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092323637983800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6126ce26-d4f9-4cba-93d7-624d87de1fe4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.638930489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17f00e42-6a37-402f-bc8b-4181508d3335 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.639075411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17f00e42-6a37-402f-bc8b-4181508d3335 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.639160895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17f00e42-6a37-402f-bc8b-4181508d3335 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.670852638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02bf00b6-2df0-4f59-8a80-a1eafd55d739 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.670934696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02bf00b6-2df0-4f59-8a80-a1eafd55d739 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.672180757Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32c08e1a-fc7f-4102-b609-2daf2812b163 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.672600175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092323672576708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32c08e1a-fc7f-4102-b609-2daf2812b163 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.673454931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb1925bf-a590-4daf-a07b-709f077e0efc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.673508460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb1925bf-a590-4daf-a07b-709f077e0efc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.673537737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cb1925bf-a590-4daf-a07b-709f077e0efc name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.707176993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53dbc3f3-dfeb-4670-ab11-c160b6e64a69 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.707245112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53dbc3f3-dfeb-4670-ab11-c160b6e64a69 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.708660671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3d15d61-1c0d-4b5d-92da-1554af2c690e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.709093264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092323709063128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3d15d61-1c0d-4b5d-92da-1554af2c690e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.709775009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf276521-b3f2-4a93-9221-f927b51a9296 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.709835816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf276521-b3f2-4a93-9221-f927b51a9296 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.709869566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bf276521-b3f2-4a93-9221-f927b51a9296 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.740650801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a78e2fc0-38fa-4dfd-9156-352751a9d3a5 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.740723197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a78e2fc0-38fa-4dfd-9156-352751a9d3a5 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.741979461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3526bcfd-86ee-4533-94e2-9c4ddb4f0245 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.742339988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092323742316472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3526bcfd-86ee-4533-94e2-9c4ddb4f0245 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.742840926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eeb9eaf7-4f1c-4815-bdd9-7a119d1620a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.742894401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eeb9eaf7-4f1c-4815-bdd9-7a119d1620a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:32:03 old-k8s-version-079123 crio[645]: time="2024-08-19 18:32:03.742923102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=eeb9eaf7-4f1c-4815-bdd9-7a119d1620a9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
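	Note: the empty container list above and this refused connection point at the same condition: no kube-apiserver is running on old-k8s-version-079123 at this point in the log. A first-pass triage on the node itself (standard crictl/systemctl usage, not taken from the test) would be:
	  sudo crictl ps -a
	  sudo systemctl status kubelet
	  sudo journalctl -u kubelet --no-pager | tail -n 50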
	
	
	==> dmesg <==
	[Aug19 18:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050661] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037961] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.796045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.906924] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.551301] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.289032] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.062660] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073191] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.227214] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.148485] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.242620] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[Aug19 18:12] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.058214] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.166270] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[ +11.850102] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 18:16] systemd-fstab-generator[5127]: Ignoring "noauto" option for root device
	[Aug19 18:18] systemd-fstab-generator[5400]: Ignoring "noauto" option for root device
	[  +0.061151] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:32:03 up 20 min,  0 users,  load average: 0.05, 0.06, 0.04
	Linux old-k8s-version-079123 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00015cfc0, 0xc000782a20, 0xc000782a20, 0x0, 0x0)
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000917340)
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: goroutine 125 [runnable]:
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0001125a0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001cb140, 0x0, 0x0)
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000917340)
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6928]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 19 18:31:59 old-k8s-version-079123 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 19 18:31:59 old-k8s-version-079123 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 19 18:31:59 old-k8s-version-079123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 144.
	Aug 19 18:31:59 old-k8s-version-079123 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 19 18:31:59 old-k8s-version-079123 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6938]: I0819 18:31:59.991349    6938 server.go:416] Version: v1.20.0
	Aug 19 18:31:59 old-k8s-version-079123 kubelet[6938]: I0819 18:31:59.992882    6938 server.go:837] Client rotation is on, will bootstrap in background
	Aug 19 18:32:00 old-k8s-version-079123 kubelet[6938]: I0819 18:32:00.002878    6938 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 19 18:32:00 old-k8s-version-079123 kubelet[6938]: I0819 18:32:00.004461    6938 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 19 18:32:00 old-k8s-version-079123 kubelet[6938]: W0819 18:32:00.004617    6938 manager.go:159] Cannot detect current cgroup on cgroup v2
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 2 (224.425072ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-079123" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (174.10s)
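The kubelet log above shows a crash loop (systemd restart counter at 144) while the API server on localhost:8443 stays unreachable, which is consistent with the status check reporting "Stopped". As a rough manual follow-up, assuming the old-k8s-version-079123 profile is still present, the node state could be inspected with commands along these lines (illustrative only, not part of the test):

	out/minikube-linux-amd64 -p old-k8s-version-079123 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-079123 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"
	out/minikube-linux-amd64 -p old-k8s-version-079123 ssh "sudo crictl ps -a"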

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (387.32s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0819 18:35:11.198953   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-306581 -n embed-certs-306581
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-19 18:41:36.227987822 +0000 UTC m=+6557.932155702
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-306581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-306581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.554µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-306581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
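The check above waits for a pod labeled k8s-app=kubernetes-dashboard and then inspects the dashboard-metrics-scraper deployment for an image containing registry.k8s.io/echoserver:1.4. A minimal manual equivalent, assuming the embed-certs-306581 context is still reachable, would look roughly like:

	kubectl --context embed-certs-306581 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-306581 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'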
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-306581 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-306581 logs -n 25: (1.124159454s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-321572 sudo iptables                       | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo cat                            | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo cat                            | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo cat                            | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo docker                         | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo cat                            | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo cat                            | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo cat                            | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo cat                            | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo                                | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo find                           | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-321572 sudo crio                           | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-321572                                     | bridge-321572 | jenkins | v1.33.1 | 19 Aug 24 18:37 UTC | 19 Aug 24 18:37 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:35:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:35:39.121008   79072 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:35:39.121520   79072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:35:39.121539   79072 out.go:358] Setting ErrFile to fd 2...
	I0819 18:35:39.121546   79072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:35:39.121726   79072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 18:35:39.122349   79072 out.go:352] Setting JSON to false
	I0819 18:35:39.123435   79072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8284,"bootTime":1724084255,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:35:39.123497   79072 start.go:139] virtualization: kvm guest
	I0819 18:35:39.125783   79072 out.go:177] * [bridge-321572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:35:39.127068   79072 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:35:39.127075   79072 notify.go:220] Checking for updates...
	I0819 18:35:39.129589   79072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:35:39.130706   79072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:35:39.132078   79072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:35:39.133328   79072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:35:39.134515   79072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:35:39.136054   79072 config.go:182] Loaded profile config "embed-certs-306581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:35:39.136142   79072 config.go:182] Loaded profile config "enable-default-cni-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:35:39.136225   79072 config.go:182] Loaded profile config "flannel-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:35:39.136303   79072 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:35:39.175594   79072 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 18:35:39.176616   79072 start.go:297] selected driver: kvm2
	I0819 18:35:39.176632   79072 start.go:901] validating driver "kvm2" against <nil>
	I0819 18:35:39.176644   79072 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:35:39.177368   79072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:35:39.177473   79072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 18:35:39.193703   79072 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 18:35:39.193760   79072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 18:35:39.193994   79072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:35:39.194031   79072 cni.go:84] Creating CNI manager for "bridge"
	I0819 18:35:39.194039   79072 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 18:35:39.194114   79072 start.go:340] cluster config:
	{Name:bridge-321572 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:35:39.194256   79072 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:35:39.196672   79072 out.go:177] * Starting "bridge-321572" primary control-plane node in "bridge-321572" cluster
	I0819 18:35:39.197685   79072 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:35:39.197769   79072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:35:39.197789   79072 cache.go:56] Caching tarball of preloaded images
	I0819 18:35:39.197886   79072 preload.go:172] Found /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:35:39.197901   79072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:35:39.198032   79072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/config.json ...
	I0819 18:35:39.198057   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/config.json: {Name:mkf643c85d88a1178ac14dbea73e5485194b334e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:35:39.198227   79072 start.go:360] acquireMachinesLock for bridge-321572: {Name:mkdbb17473bc80293d28b9c40ee62663f0a485a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 18:35:39.198269   79072 start.go:364] duration metric: took 27.451µs to acquireMachinesLock for "bridge-321572"
	I0819 18:35:39.198291   79072 start.go:93] Provisioning new machine with config: &{Name:bridge-321572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:bridge-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:35:39.198361   79072 start.go:125] createHost starting for "" (driver="kvm2")
	I0819 18:35:38.769019   76968 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 18:35:38.775417   76968 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 18:35:38.775435   76968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0819 18:35:38.795151   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 18:35:39.238599   76968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:35:39.238765   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:39.238854   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-321572 minikube.k8s.io/updated_at=2024_08_19T18_35_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=flannel-321572 minikube.k8s.io/primary=true
	I0819 18:35:39.291575   76968 ops.go:34] apiserver oom_adj: -16
	I0819 18:35:39.479331   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:39.979736   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:40.480254   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:40.979460   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:41.480201   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:41.979853   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:42.479449   76968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:35:42.575162   76968 kubeadm.go:1113] duration metric: took 3.336437537s to wait for elevateKubeSystemPrivileges
	I0819 18:35:42.575212   76968 kubeadm.go:394] duration metric: took 14.730778568s to StartCluster
	I0819 18:35:42.575236   76968 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:35:42.575313   76968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:35:42.577245   76968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:35:42.577543   76968 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.93 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:35:42.577681   76968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 18:35:42.577740   76968 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:35:42.577824   76968 addons.go:69] Setting storage-provisioner=true in profile "flannel-321572"
	I0819 18:35:42.577841   76968 config.go:182] Loaded profile config "flannel-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:35:42.577854   76968 addons.go:234] Setting addon storage-provisioner=true in "flannel-321572"
	I0819 18:35:42.577885   76968 host.go:66] Checking if "flannel-321572" exists ...
	I0819 18:35:42.577895   76968 addons.go:69] Setting default-storageclass=true in profile "flannel-321572"
	I0819 18:35:42.577940   76968 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-321572"
	I0819 18:35:42.578333   76968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:35:42.578352   76968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:35:42.578373   76968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:35:42.578395   76968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:35:42.579255   76968 out.go:177] * Verifying Kubernetes components...
	I0819 18:35:42.580601   76968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:35:42.597262   76968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0819 18:35:42.597814   76968 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:35:42.598355   76968 main.go:141] libmachine: Using API Version  1
	I0819 18:35:42.598374   76968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:35:42.599787   76968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0819 18:35:42.600277   76968 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:35:42.600937   76968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:35:42.600963   76968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:35:42.601173   76968 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:35:42.601682   76968 main.go:141] libmachine: Using API Version  1
	I0819 18:35:42.601706   76968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:35:42.602085   76968 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:35:42.602293   76968 main.go:141] libmachine: (flannel-321572) Calling .GetState
	I0819 18:35:42.606203   76968 addons.go:234] Setting addon default-storageclass=true in "flannel-321572"
	I0819 18:35:42.606246   76968 host.go:66] Checking if "flannel-321572" exists ...
	I0819 18:35:42.606593   76968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:35:42.606637   76968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:35:42.622184   76968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41865
	I0819 18:35:42.622633   76968 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:35:42.622889   76968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0819 18:35:42.623257   76968 main.go:141] libmachine: Using API Version  1
	I0819 18:35:42.623276   76968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:35:42.623489   76968 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:35:42.623662   76968 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:35:42.623842   76968 main.go:141] libmachine: (flannel-321572) Calling .GetState
	I0819 18:35:42.624799   76968 main.go:141] libmachine: Using API Version  1
	I0819 18:35:42.624815   76968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:35:42.625312   76968 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:35:42.625831   76968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:35:42.625846   76968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:35:42.626046   76968 main.go:141] libmachine: (flannel-321572) Calling .DriverName
	I0819 18:35:42.628083   76968 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:35:42.629325   76968 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:35:42.629343   76968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:35:42.629371   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHHostname
	I0819 18:35:42.632481   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:35:42.633127   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHPort
	I0819 18:35:42.633184   76968 main.go:141] libmachine: (flannel-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:6d:67", ip: ""} in network mk-flannel-321572: {Iface:virbr3 ExpiryTime:2024-08-19 19:35:07 +0000 UTC Type:0 Mac:52:54:00:5f:6d:67 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:flannel-321572 Clientid:01:52:54:00:5f:6d:67}
	I0819 18:35:42.633194   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined IP address 192.168.61.93 and MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:35:42.633281   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHKeyPath
	I0819 18:35:42.633412   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHUsername
	I0819 18:35:42.633568   76968 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572/id_rsa Username:docker}
	I0819 18:35:42.643381   76968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0819 18:35:42.643734   76968 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:35:42.644267   76968 main.go:141] libmachine: Using API Version  1
	I0819 18:35:42.644286   76968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:35:42.644685   76968 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:35:42.644894   76968 main.go:141] libmachine: (flannel-321572) Calling .GetState
	I0819 18:35:42.646237   76968 main.go:141] libmachine: (flannel-321572) Calling .DriverName
	I0819 18:35:42.646484   76968 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:35:42.646498   76968 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:35:42.646514   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHHostname
	I0819 18:35:42.649451   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:35:42.649869   76968 main.go:141] libmachine: (flannel-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:6d:67", ip: ""} in network mk-flannel-321572: {Iface:virbr3 ExpiryTime:2024-08-19 19:35:07 +0000 UTC Type:0 Mac:52:54:00:5f:6d:67 Iaid: IPaddr:192.168.61.93 Prefix:24 Hostname:flannel-321572 Clientid:01:52:54:00:5f:6d:67}
	I0819 18:35:42.649886   76968 main.go:141] libmachine: (flannel-321572) DBG | domain flannel-321572 has defined IP address 192.168.61.93 and MAC address 52:54:00:5f:6d:67 in network mk-flannel-321572
	I0819 18:35:42.650138   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHPort
	I0819 18:35:42.650304   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHKeyPath
	I0819 18:35:42.650452   76968 main.go:141] libmachine: (flannel-321572) Calling .GetSSHUsername
	I0819 18:35:42.650576   76968 sshutil.go:53] new ssh client: &{IP:192.168.61.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/flannel-321572/id_rsa Username:docker}
	I0819 18:35:42.794344   76968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 18:35:42.831698   76968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:35:43.043234   76968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:35:43.081743   76968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:35:43.404261   76968 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0819 18:35:43.406081   76968 node_ready.go:35] waiting up to 15m0s for node "flannel-321572" to be "Ready" ...
	I0819 18:35:43.915678   76968 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-321572" context rescaled to 1 replicas
	I0819 18:35:43.958251   76968 main.go:141] libmachine: Making call to close driver server
	I0819 18:35:43.958280   76968 main.go:141] libmachine: (flannel-321572) Calling .Close
	I0819 18:35:43.958732   76968 main.go:141] libmachine: (flannel-321572) DBG | Closing plugin on server side
	I0819 18:35:43.958827   76968 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:35:43.958840   76968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:35:43.958840   76968 main.go:141] libmachine: Making call to close driver server
	I0819 18:35:43.958851   76968 main.go:141] libmachine: Making call to close driver server
	I0819 18:35:43.958859   76968 main.go:141] libmachine: (flannel-321572) Calling .Close
	I0819 18:35:43.958852   76968 main.go:141] libmachine: (flannel-321572) Calling .Close
	I0819 18:35:43.959127   76968 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:35:43.959147   76968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:35:43.960406   76968 main.go:141] libmachine: (flannel-321572) DBG | Closing plugin on server side
	I0819 18:35:43.960485   76968 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:35:43.960519   76968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:35:43.960537   76968 main.go:141] libmachine: Making call to close driver server
	I0819 18:35:43.960558   76968 main.go:141] libmachine: (flannel-321572) Calling .Close
	I0819 18:35:43.960831   76968 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:35:43.960847   76968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:35:43.972781   76968 main.go:141] libmachine: Making call to close driver server
	I0819 18:35:43.972809   76968 main.go:141] libmachine: (flannel-321572) Calling .Close
	I0819 18:35:43.973248   76968 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:35:43.973264   76968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:35:43.975297   76968 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0819 18:35:39.199863   79072 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 18:35:39.200030   79072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:35:39.200071   79072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:35:39.216484   79072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0819 18:35:39.217063   79072 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:35:39.217664   79072 main.go:141] libmachine: Using API Version  1
	I0819 18:35:39.217684   79072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:35:39.218071   79072 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:35:39.218282   79072 main.go:141] libmachine: (bridge-321572) Calling .GetMachineName
	I0819 18:35:39.218492   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:35:39.218685   79072 start.go:159] libmachine.API.Create for "bridge-321572" (driver="kvm2")
	I0819 18:35:39.218720   79072 client.go:168] LocalClient.Create starting
	I0819 18:35:39.218759   79072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem
	I0819 18:35:39.218809   79072 main.go:141] libmachine: Decoding PEM data...
	I0819 18:35:39.218835   79072 main.go:141] libmachine: Parsing certificate...
	I0819 18:35:39.218931   79072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem
	I0819 18:35:39.218953   79072 main.go:141] libmachine: Decoding PEM data...
	I0819 18:35:39.218968   79072 main.go:141] libmachine: Parsing certificate...
	I0819 18:35:39.218990   79072 main.go:141] libmachine: Running pre-create checks...
	I0819 18:35:39.218997   79072 main.go:141] libmachine: (bridge-321572) Calling .PreCreateCheck
	I0819 18:35:39.219499   79072 main.go:141] libmachine: (bridge-321572) Calling .GetConfigRaw
	I0819 18:35:39.219897   79072 main.go:141] libmachine: Creating machine...
	I0819 18:35:39.219911   79072 main.go:141] libmachine: (bridge-321572) Calling .Create
	I0819 18:35:39.220065   79072 main.go:141] libmachine: (bridge-321572) Creating KVM machine...
	I0819 18:35:39.221514   79072 main.go:141] libmachine: (bridge-321572) DBG | found existing default KVM network
	I0819 18:35:39.223685   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:39.223524   79095 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e100}
	I0819 18:35:39.223707   79072 main.go:141] libmachine: (bridge-321572) DBG | created network xml: 
	I0819 18:35:39.223724   79072 main.go:141] libmachine: (bridge-321572) DBG | <network>
	I0819 18:35:39.223740   79072 main.go:141] libmachine: (bridge-321572) DBG |   <name>mk-bridge-321572</name>
	I0819 18:35:39.223751   79072 main.go:141] libmachine: (bridge-321572) DBG |   <dns enable='no'/>
	I0819 18:35:39.223761   79072 main.go:141] libmachine: (bridge-321572) DBG |   
	I0819 18:35:39.223772   79072 main.go:141] libmachine: (bridge-321572) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0819 18:35:39.223782   79072 main.go:141] libmachine: (bridge-321572) DBG |     <dhcp>
	I0819 18:35:39.223792   79072 main.go:141] libmachine: (bridge-321572) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0819 18:35:39.223798   79072 main.go:141] libmachine: (bridge-321572) DBG |     </dhcp>
	I0819 18:35:39.223810   79072 main.go:141] libmachine: (bridge-321572) DBG |   </ip>
	I0819 18:35:39.223816   79072 main.go:141] libmachine: (bridge-321572) DBG |   
	I0819 18:35:39.223878   79072 main.go:141] libmachine: (bridge-321572) DBG | </network>
	I0819 18:35:39.223903   79072 main.go:141] libmachine: (bridge-321572) DBG | 
	I0819 18:35:39.228695   79072 main.go:141] libmachine: (bridge-321572) DBG | trying to create private KVM network mk-bridge-321572 192.168.39.0/24...
	I0819 18:35:39.307781   79072 main.go:141] libmachine: (bridge-321572) DBG | private KVM network mk-bridge-321572 192.168.39.0/24 created
	I0819 18:35:39.307809   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:39.307747   79095 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:35:39.307891   79072 main.go:141] libmachine: (bridge-321572) Setting up store path in /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572 ...
	I0819 18:35:39.307928   79072 main.go:141] libmachine: (bridge-321572) Building disk image from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 18:35:39.307964   79072 main.go:141] libmachine: (bridge-321572) Downloading /home/jenkins/minikube-integration/19478-10654/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0819 18:35:39.578310   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:39.578132   79095 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa...
	I0819 18:35:39.828976   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:39.828841   79095 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/bridge-321572.rawdisk...
	I0819 18:35:39.829010   79072 main.go:141] libmachine: (bridge-321572) DBG | Writing magic tar header
	I0819 18:35:39.829032   79072 main.go:141] libmachine: (bridge-321572) DBG | Writing SSH key tar header
	I0819 18:35:39.829049   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:39.828956   79095 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572 ...
	I0819 18:35:39.829113   79072 main.go:141] libmachine: (bridge-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572
	I0819 18:35:39.829153   79072 main.go:141] libmachine: (bridge-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube/machines
	I0819 18:35:39.829175   79072 main.go:141] libmachine: (bridge-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572 (perms=drwx------)
	I0819 18:35:39.829187   79072 main.go:141] libmachine: (bridge-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 18:35:39.829197   79072 main.go:141] libmachine: (bridge-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19478-10654
	I0819 18:35:39.829205   79072 main.go:141] libmachine: (bridge-321572) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0819 18:35:39.829212   79072 main.go:141] libmachine: (bridge-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube/machines (perms=drwxr-xr-x)
	I0819 18:35:39.829220   79072 main.go:141] libmachine: (bridge-321572) DBG | Checking permissions on dir: /home/jenkins
	I0819 18:35:39.829227   79072 main.go:141] libmachine: (bridge-321572) DBG | Checking permissions on dir: /home
	I0819 18:35:39.829234   79072 main.go:141] libmachine: (bridge-321572) DBG | Skipping /home - not owner
	I0819 18:35:39.829261   79072 main.go:141] libmachine: (bridge-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654/.minikube (perms=drwxr-xr-x)
	I0819 18:35:39.829288   79072 main.go:141] libmachine: (bridge-321572) Setting executable bit set on /home/jenkins/minikube-integration/19478-10654 (perms=drwxrwxr-x)
	I0819 18:35:39.829306   79072 main.go:141] libmachine: (bridge-321572) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0819 18:35:39.829319   79072 main.go:141] libmachine: (bridge-321572) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0819 18:35:39.829332   79072 main.go:141] libmachine: (bridge-321572) Creating domain...
	I0819 18:35:39.830378   79072 main.go:141] libmachine: (bridge-321572) define libvirt domain using xml: 
	I0819 18:35:39.830399   79072 main.go:141] libmachine: (bridge-321572) <domain type='kvm'>
	I0819 18:35:39.830410   79072 main.go:141] libmachine: (bridge-321572)   <name>bridge-321572</name>
	I0819 18:35:39.830418   79072 main.go:141] libmachine: (bridge-321572)   <memory unit='MiB'>3072</memory>
	I0819 18:35:39.830427   79072 main.go:141] libmachine: (bridge-321572)   <vcpu>2</vcpu>
	I0819 18:35:39.830435   79072 main.go:141] libmachine: (bridge-321572)   <features>
	I0819 18:35:39.830447   79072 main.go:141] libmachine: (bridge-321572)     <acpi/>
	I0819 18:35:39.830453   79072 main.go:141] libmachine: (bridge-321572)     <apic/>
	I0819 18:35:39.830462   79072 main.go:141] libmachine: (bridge-321572)     <pae/>
	I0819 18:35:39.830478   79072 main.go:141] libmachine: (bridge-321572)     
	I0819 18:35:39.830496   79072 main.go:141] libmachine: (bridge-321572)   </features>
	I0819 18:35:39.830516   79072 main.go:141] libmachine: (bridge-321572)   <cpu mode='host-passthrough'>
	I0819 18:35:39.830530   79072 main.go:141] libmachine: (bridge-321572)   
	I0819 18:35:39.830540   79072 main.go:141] libmachine: (bridge-321572)   </cpu>
	I0819 18:35:39.830551   79072 main.go:141] libmachine: (bridge-321572)   <os>
	I0819 18:35:39.830558   79072 main.go:141] libmachine: (bridge-321572)     <type>hvm</type>
	I0819 18:35:39.830564   79072 main.go:141] libmachine: (bridge-321572)     <boot dev='cdrom'/>
	I0819 18:35:39.830571   79072 main.go:141] libmachine: (bridge-321572)     <boot dev='hd'/>
	I0819 18:35:39.830577   79072 main.go:141] libmachine: (bridge-321572)     <bootmenu enable='no'/>
	I0819 18:35:39.830587   79072 main.go:141] libmachine: (bridge-321572)   </os>
	I0819 18:35:39.830607   79072 main.go:141] libmachine: (bridge-321572)   <devices>
	I0819 18:35:39.830620   79072 main.go:141] libmachine: (bridge-321572)     <disk type='file' device='cdrom'>
	I0819 18:35:39.830642   79072 main.go:141] libmachine: (bridge-321572)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/boot2docker.iso'/>
	I0819 18:35:39.830657   79072 main.go:141] libmachine: (bridge-321572)       <target dev='hdc' bus='scsi'/>
	I0819 18:35:39.830671   79072 main.go:141] libmachine: (bridge-321572)       <readonly/>
	I0819 18:35:39.830678   79072 main.go:141] libmachine: (bridge-321572)     </disk>
	I0819 18:35:39.830690   79072 main.go:141] libmachine: (bridge-321572)     <disk type='file' device='disk'>
	I0819 18:35:39.830703   79072 main.go:141] libmachine: (bridge-321572)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0819 18:35:39.830718   79072 main.go:141] libmachine: (bridge-321572)       <source file='/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/bridge-321572.rawdisk'/>
	I0819 18:35:39.830733   79072 main.go:141] libmachine: (bridge-321572)       <target dev='hda' bus='virtio'/>
	I0819 18:35:39.830744   79072 main.go:141] libmachine: (bridge-321572)     </disk>
	I0819 18:35:39.830755   79072 main.go:141] libmachine: (bridge-321572)     <interface type='network'>
	I0819 18:35:39.830763   79072 main.go:141] libmachine: (bridge-321572)       <source network='mk-bridge-321572'/>
	I0819 18:35:39.830773   79072 main.go:141] libmachine: (bridge-321572)       <model type='virtio'/>
	I0819 18:35:39.830785   79072 main.go:141] libmachine: (bridge-321572)     </interface>
	I0819 18:35:39.830796   79072 main.go:141] libmachine: (bridge-321572)     <interface type='network'>
	I0819 18:35:39.830807   79072 main.go:141] libmachine: (bridge-321572)       <source network='default'/>
	I0819 18:35:39.830818   79072 main.go:141] libmachine: (bridge-321572)       <model type='virtio'/>
	I0819 18:35:39.830829   79072 main.go:141] libmachine: (bridge-321572)     </interface>
	I0819 18:35:39.830841   79072 main.go:141] libmachine: (bridge-321572)     <serial type='pty'>
	I0819 18:35:39.830857   79072 main.go:141] libmachine: (bridge-321572)       <target port='0'/>
	I0819 18:35:39.830869   79072 main.go:141] libmachine: (bridge-321572)     </serial>
	I0819 18:35:39.830879   79072 main.go:141] libmachine: (bridge-321572)     <console type='pty'>
	I0819 18:35:39.830896   79072 main.go:141] libmachine: (bridge-321572)       <target type='serial' port='0'/>
	I0819 18:35:39.830911   79072 main.go:141] libmachine: (bridge-321572)     </console>
	I0819 18:35:39.830923   79072 main.go:141] libmachine: (bridge-321572)     <rng model='virtio'>
	I0819 18:35:39.830944   79072 main.go:141] libmachine: (bridge-321572)       <backend model='random'>/dev/random</backend>
	I0819 18:35:39.830956   79072 main.go:141] libmachine: (bridge-321572)     </rng>
	I0819 18:35:39.830966   79072 main.go:141] libmachine: (bridge-321572)     
	I0819 18:35:39.830974   79072 main.go:141] libmachine: (bridge-321572)     
	I0819 18:35:39.830984   79072 main.go:141] libmachine: (bridge-321572)   </devices>
	I0819 18:35:39.830995   79072 main.go:141] libmachine: (bridge-321572) </domain>
	I0819 18:35:39.831004   79072 main.go:141] libmachine: (bridge-321572) 
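The XML block above is the complete libvirt domain definition that the driver passes when it logs "define libvirt domain using xml". As a rough illustration of how such a definition is typically rendered from a template, the following Go sketch fills in only the name/memory/vcpu values seen in the log; it is not the kvm2 driver's real template, and everything beyond those three fields is omitted.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Only the handful of fields visible at the top of the logged XML; the real
    // driver template also emits <features>, <os>, <devices> and the two
    // <interface> sections shown in the log above.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
    </domain>
    `

    func main() {
    	t := template.Must(template.New("domain").Parse(domainTmpl))
    	if err := t.Execute(os.Stdout, struct {
    		Name      string
    		MemoryMiB int
    		CPUs      int
    	}{Name: "bridge-321572", MemoryMiB: 3072, CPUs: 2}); err != nil {
    		panic(err)
    	}
    }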
	I0819 18:35:39.835061   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:54:3d:64 in network default
	I0819 18:35:39.835694   79072 main.go:141] libmachine: (bridge-321572) Ensuring networks are active...
	I0819 18:35:39.835719   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:39.836431   79072 main.go:141] libmachine: (bridge-321572) Ensuring network default is active
	I0819 18:35:39.836770   79072 main.go:141] libmachine: (bridge-321572) Ensuring network mk-bridge-321572 is active
	I0819 18:35:39.837306   79072 main.go:141] libmachine: (bridge-321572) Getting domain xml...
	I0819 18:35:39.838012   79072 main.go:141] libmachine: (bridge-321572) Creating domain...
	I0819 18:35:41.101166   79072 main.go:141] libmachine: (bridge-321572) Waiting to get IP...
	I0819 18:35:41.101993   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:41.102447   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:41.102505   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:41.102445   79095 retry.go:31] will retry after 262.716921ms: waiting for machine to come up
	I0819 18:35:41.366922   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:41.367454   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:41.367480   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:41.367420   79095 retry.go:31] will retry after 360.730313ms: waiting for machine to come up
	I0819 18:35:41.729905   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:41.730381   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:41.730423   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:41.730371   79095 retry.go:31] will retry after 475.606579ms: waiting for machine to come up
	I0819 18:35:42.208139   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:42.209654   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:42.209681   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:42.209615   79095 retry.go:31] will retry after 573.178958ms: waiting for machine to come up
	I0819 18:35:42.784041   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:42.784657   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:42.784679   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:42.784619   79095 retry.go:31] will retry after 565.402583ms: waiting for machine to come up
	I0819 18:35:43.351440   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:43.352036   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:43.352087   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:43.352029   79095 retry.go:31] will retry after 716.069578ms: waiting for machine to come up
	I0819 18:35:44.069960   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:44.070587   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:44.070618   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:44.070515   79095 retry.go:31] will retry after 1.157840689s: waiting for machine to come up
	I0819 18:35:43.976528   76968 addons.go:510] duration metric: took 1.398797901s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0819 18:35:45.409995   76968 node_ready.go:53] node "flannel-321572" has status "Ready":"False"
	I0819 18:35:45.230104   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:45.230588   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:45.230611   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:45.230554   79095 retry.go:31] will retry after 906.160084ms: waiting for machine to come up
	I0819 18:35:46.137970   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:46.138508   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:46.138531   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:46.138453   79095 retry.go:31] will retry after 1.705937118s: waiting for machine to come up
	I0819 18:35:47.845588   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:47.846029   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:47.846055   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:47.845992   79095 retry.go:31] will retry after 1.781843933s: waiting for machine to come up
	I0819 18:35:47.910637   76968 node_ready.go:53] node "flannel-321572" has status "Ready":"False"
	I0819 18:35:50.415186   76968 node_ready.go:53] node "flannel-321572" has status "Ready":"False"
	I0819 18:35:49.629173   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:49.629690   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:49.629717   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:49.629653   79095 retry.go:31] will retry after 1.840172521s: waiting for machine to come up
	I0819 18:35:51.623953   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:51.624685   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:51.624708   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:51.624654   79095 retry.go:31] will retry after 2.882616277s: waiting for machine to come up
	I0819 18:35:52.424312   76968 node_ready.go:49] node "flannel-321572" has status "Ready":"True"
	I0819 18:35:52.424342   76968 node_ready.go:38] duration metric: took 9.018229465s for node "flannel-321572" to be "Ready" ...
	I0819 18:35:52.424352   76968 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:35:52.466138   76968 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace to be "Ready" ...
	I0819 18:35:54.472938   76968 pod_ready.go:103] pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace has status "Ready":"False"
	I0819 18:35:54.509103   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:54.509653   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:54.509683   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:54.509571   79095 retry.go:31] will retry after 3.831208516s: waiting for machine to come up
	I0819 18:35:58.342007   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:35:58.342522   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find current IP address of domain bridge-321572 in network mk-bridge-321572
	I0819 18:35:58.342544   79072 main.go:141] libmachine: (bridge-321572) DBG | I0819 18:35:58.342486   79095 retry.go:31] will retry after 5.575867884s: waiting for machine to come up
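The repeating "unable to find current IP address ... will retry after ..." lines are the driver polling the domain's DHCP lease with a growing delay until an address appears. A minimal, illustrative Go sketch of that polling pattern follows; it is not the actual retry.go implementation, and the lookup function and timings are invented for the example.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // waitForIP polls lookup until it succeeds or attempts run out, sleeping a
    // little longer (with jitter) between tries, similar to the
    // "will retry after Xms: waiting for machine to come up" lines above.
    func waitForIP(lookup func() (string, error), maxTries int) (string, error) {
    	delay := 250 * time.Millisecond
    	for i := 0; i < maxTries; i++ {
    		ip, err := lookup()
    		if err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay += delay / 2 // grow the base delay between attempts
    	}
    	return "", errNoIP
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errNoIP // simulate the DHCP lease not being ready yet
    		}
    		return "192.168.39.54", nil
    	}, 10)
    	fmt.Println(ip, err)
    }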
	I0819 18:35:56.972992   76968 pod_ready.go:103] pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace has status "Ready":"False"
	I0819 18:35:59.472234   76968 pod_ready.go:103] pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:03.921868   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:03.922496   79072 main.go:141] libmachine: (bridge-321572) Found IP for machine: 192.168.39.54
	I0819 18:36:03.922517   79072 main.go:141] libmachine: (bridge-321572) Reserving static IP address...
	I0819 18:36:03.922530   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has current primary IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:03.923022   79072 main.go:141] libmachine: (bridge-321572) DBG | unable to find host DHCP lease matching {name: "bridge-321572", mac: "52:54:00:3e:2c:60", ip: "192.168.39.54"} in network mk-bridge-321572
	I0819 18:36:04.001599   79072 main.go:141] libmachine: (bridge-321572) DBG | Getting to WaitForSSH function...
	I0819 18:36:04.001634   79072 main.go:141] libmachine: (bridge-321572) Reserved static IP address: 192.168.39.54
	I0819 18:36:04.001676   79072 main.go:141] libmachine: (bridge-321572) Waiting for SSH to be available...
	I0819 18:36:04.004330   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.004811   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.004838   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.004993   79072 main.go:141] libmachine: (bridge-321572) DBG | Using SSH client type: external
	I0819 18:36:04.005017   79072 main.go:141] libmachine: (bridge-321572) DBG | Using SSH private key: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa (-rw-------)
	I0819 18:36:04.005036   79072 main.go:141] libmachine: (bridge-321572) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0819 18:36:04.005042   79072 main.go:141] libmachine: (bridge-321572) DBG | About to run SSH command:
	I0819 18:36:04.005051   79072 main.go:141] libmachine: (bridge-321572) DBG | exit 0
	I0819 18:36:01.472318   76968 pod_ready.go:103] pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:03.473300   76968 pod_ready.go:103] pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:05.973266   76968 pod_ready.go:103] pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:04.129016   79072 main.go:141] libmachine: (bridge-321572) DBG | SSH cmd err, output: <nil>: 
	I0819 18:36:04.129282   79072 main.go:141] libmachine: (bridge-321572) KVM machine creation complete!
	I0819 18:36:04.129656   79072 main.go:141] libmachine: (bridge-321572) Calling .GetConfigRaw
	I0819 18:36:04.130177   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:04.130378   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:04.130562   79072 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0819 18:36:04.130575   79072 main.go:141] libmachine: (bridge-321572) Calling .GetState
	I0819 18:36:04.131951   79072 main.go:141] libmachine: Detecting operating system of created instance...
	I0819 18:36:04.131969   79072 main.go:141] libmachine: Waiting for SSH to be available...
	I0819 18:36:04.131976   79072 main.go:141] libmachine: Getting to WaitForSSH function...
	I0819 18:36:04.131983   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:04.135124   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.135526   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.135557   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.135724   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:04.135901   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.136073   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.136213   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:04.136463   79072 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:04.136666   79072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0819 18:36:04.136676   79072 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0819 18:36:04.239931   79072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:36:04.239953   79072 main.go:141] libmachine: Detecting the provisioner...
	I0819 18:36:04.239963   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:04.243088   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.243511   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.243529   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.243714   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:04.243926   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.244144   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.244297   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:04.244462   79072 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:04.244619   79072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0819 18:36:04.244629   79072 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0819 18:36:04.349220   79072 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0819 18:36:04.349319   79072 main.go:141] libmachine: found compatible host: buildroot
	I0819 18:36:04.349334   79072 main.go:141] libmachine: Provisioning with buildroot...
	I0819 18:36:04.349347   79072 main.go:141] libmachine: (bridge-321572) Calling .GetMachineName
	I0819 18:36:04.349623   79072 buildroot.go:166] provisioning hostname "bridge-321572"
	I0819 18:36:04.349647   79072 main.go:141] libmachine: (bridge-321572) Calling .GetMachineName
	I0819 18:36:04.349800   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:04.352477   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.352881   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.352921   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.353092   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:04.353285   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.353465   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.353660   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:04.353833   79072 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:04.353994   79072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0819 18:36:04.354007   79072 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-321572 && echo "bridge-321572" | sudo tee /etc/hostname
	I0819 18:36:04.474969   79072 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-321572
	
	I0819 18:36:04.475007   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:04.477700   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.478165   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.478187   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.478397   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:04.478583   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.478774   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.478971   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:04.479170   79072 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:04.479405   79072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0819 18:36:04.479431   79072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-321572' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-321572/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-321572' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:36:04.593366   79072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:36:04.593400   79072 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19478-10654/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-10654/.minikube}
	I0819 18:36:04.593469   79072 buildroot.go:174] setting up certificates
	I0819 18:36:04.593480   79072 provision.go:84] configureAuth start
	I0819 18:36:04.593490   79072 main.go:141] libmachine: (bridge-321572) Calling .GetMachineName
	I0819 18:36:04.593803   79072 main.go:141] libmachine: (bridge-321572) Calling .GetIP
	I0819 18:36:04.596398   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.596775   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.596800   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.596934   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:04.599170   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.599515   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.599544   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.599711   79072 provision.go:143] copyHostCerts
	I0819 18:36:04.599768   79072 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem, removing ...
	I0819 18:36:04.599782   79072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem
	I0819 18:36:04.599855   79072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/ca.pem (1078 bytes)
	I0819 18:36:04.599962   79072 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem, removing ...
	I0819 18:36:04.599972   79072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem
	I0819 18:36:04.600000   79072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/cert.pem (1123 bytes)
	I0819 18:36:04.600072   79072 exec_runner.go:144] found /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem, removing ...
	I0819 18:36:04.600081   79072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem
	I0819 18:36:04.600104   79072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-10654/.minikube/key.pem (1679 bytes)
	I0819 18:36:04.600185   79072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem org=jenkins.bridge-321572 san=[127.0.0.1 192.168.39.54 bridge-321572 localhost minikube]
	I0819 18:36:04.708482   79072 provision.go:177] copyRemoteCerts
	I0819 18:36:04.708545   79072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:36:04.708575   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:04.711623   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.711938   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.711972   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.712153   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:04.712352   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.712553   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:04.712690   79072 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa Username:docker}
	I0819 18:36:04.794980   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:36:04.820093   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:36:04.842082   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:36:04.863580   79072 provision.go:87] duration metric: took 270.0892ms to configureAuth
	I0819 18:36:04.863616   79072 buildroot.go:189] setting minikube options for container-runtime
	I0819 18:36:04.863800   79072 config.go:182] Loaded profile config "bridge-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:36:04.863912   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:04.866819   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.867188   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:04.867221   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:04.867483   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:04.867721   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.867888   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:04.868060   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:04.868240   79072 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:04.868414   79072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0819 18:36:04.868427   79072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:36:05.132348   79072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:36:05.132370   79072 main.go:141] libmachine: Checking connection to Docker...
	I0819 18:36:05.132377   79072 main.go:141] libmachine: (bridge-321572) Calling .GetURL
	I0819 18:36:05.133885   79072 main.go:141] libmachine: (bridge-321572) DBG | Using libvirt version 6000000
	I0819 18:36:05.136237   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.136574   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:05.136603   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.136740   79072 main.go:141] libmachine: Docker is up and running!
	I0819 18:36:05.136769   79072 main.go:141] libmachine: Reticulating splines...
	I0819 18:36:05.136777   79072 client.go:171] duration metric: took 25.918045251s to LocalClient.Create
	I0819 18:36:05.136801   79072 start.go:167] duration metric: took 25.918117642s to libmachine.API.Create "bridge-321572"
	I0819 18:36:05.136813   79072 start.go:293] postStartSetup for "bridge-321572" (driver="kvm2")
	I0819 18:36:05.136825   79072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:36:05.136846   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:05.137059   79072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:36:05.137080   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:05.139423   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.139760   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:05.139787   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.139892   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:05.140146   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:05.140337   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:05.140531   79072 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa Username:docker}
	I0819 18:36:05.224185   79072 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:36:05.228490   79072 info.go:137] Remote host: Buildroot 2023.02.9
	I0819 18:36:05.228517   79072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/addons for local assets ...
	I0819 18:36:05.228594   79072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-10654/.minikube/files for local assets ...
	I0819 18:36:05.228702   79072 filesync.go:149] local asset: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem -> 178372.pem in /etc/ssl/certs
	I0819 18:36:05.228885   79072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:36:05.239029   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:36:05.262358   79072 start.go:296] duration metric: took 125.511786ms for postStartSetup
	I0819 18:36:05.262421   79072 main.go:141] libmachine: (bridge-321572) Calling .GetConfigRaw
	I0819 18:36:05.263076   79072 main.go:141] libmachine: (bridge-321572) Calling .GetIP
	I0819 18:36:05.266192   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.266559   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:05.266582   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.266871   79072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/config.json ...
	I0819 18:36:05.267093   79072 start.go:128] duration metric: took 26.068722235s to createHost
	I0819 18:36:05.267116   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:05.269200   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.269619   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:05.269653   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.269801   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:05.270004   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:05.270157   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:05.270306   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:05.270479   79072 main.go:141] libmachine: Using SSH client type: native
	I0819 18:36:05.270680   79072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0819 18:36:05.270705   79072 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 18:36:05.381392   79072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724092565.358039719
	
	I0819 18:36:05.381413   79072 fix.go:216] guest clock: 1724092565.358039719
	I0819 18:36:05.381423   79072 fix.go:229] Guest: 2024-08-19 18:36:05.358039719 +0000 UTC Remote: 2024-08-19 18:36:05.267105345 +0000 UTC m=+26.184434487 (delta=90.934374ms)
	I0819 18:36:05.381446   79072 fix.go:200] guest clock delta is within tolerance: 90.934374ms
	I0819 18:36:05.381452   79072 start.go:83] releasing machines lock for "bridge-321572", held for 26.183172171s
	I0819 18:36:05.381494   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:05.381745   79072 main.go:141] libmachine: (bridge-321572) Calling .GetIP
	I0819 18:36:05.384582   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.384964   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:05.384999   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.385195   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:05.385692   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:05.385890   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:05.386013   79072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:36:05.386057   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:05.386089   79072 ssh_runner.go:195] Run: cat /version.json
	I0819 18:36:05.386105   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:05.388395   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.388791   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:05.388818   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.388842   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.388942   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:05.389100   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:05.389258   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:05.389299   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:05.389321   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:05.389438   79072 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa Username:docker}
	I0819 18:36:05.389504   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:05.389656   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:05.389778   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:05.389923   79072 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa Username:docker}
	I0819 18:36:05.465484   79072 ssh_runner.go:195] Run: systemctl --version
	I0819 18:36:05.507912   79072 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:36:05.670306   79072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 18:36:05.677091   79072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 18:36:05.677158   79072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:36:05.693226   79072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 18:36:05.693247   79072 start.go:495] detecting cgroup driver to use...
	I0819 18:36:05.693318   79072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:36:05.710909   79072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:36:05.725302   79072 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:36:05.725366   79072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:36:05.739313   79072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:36:05.751592   79072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:36:05.877443   79072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:36:06.029128   79072 docker.go:233] disabling docker service ...
	I0819 18:36:06.029226   79072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:36:06.043034   79072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:36:06.056202   79072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:36:06.193219   79072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:36:06.320641   79072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:36:06.335772   79072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:36:06.354854   79072 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:36:06.354940   79072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:36:06.369337   79072 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:36:06.369428   79072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:36:06.382885   79072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:36:06.393309   79072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:36:06.404174   79072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:36:06.415132   79072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:36:06.425302   79072 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:36:06.442664   79072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
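The sed invocations above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup and the unprivileged-port sysctl. A small illustrative Go check for those settings, run on the guest, might look like the sketch below; the path and expected values are taken from the log, but the check itself is not part of minikube.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Sanity-check the drop-in edited by the sed commands above.
    func main() {
    	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	conf := string(data)
    	for _, want := range []string{
    		`pause_image = "registry.k8s.io/pause:3.10"`,
    		`cgroup_manager = "cgroupfs"`,
    		`conmon_cgroup = "pod"`,
    		`"net.ipv4.ip_unprivileged_port_start=0"`,
    	} {
    		fmt.Printf("%-50s present=%v\n", want, strings.Contains(conf, want))
    	}
    }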
	I0819 18:36:06.453178   79072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:36:06.462430   79072 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0819 18:36:06.462495   79072 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0819 18:36:06.476414   79072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:36:06.485608   79072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:36:06.608259   79072 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:36:06.745502   79072 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:36:06.745594   79072 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:36:06.755277   79072 start.go:563] Will wait 60s for crictl version
	I0819 18:36:06.755327   79072 ssh_runner.go:195] Run: which crictl
	I0819 18:36:06.758964   79072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:36:06.795544   79072 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0819 18:36:06.795608   79072 ssh_runner.go:195] Run: crio --version
	I0819 18:36:06.824158   79072 ssh_runner.go:195] Run: crio --version
	I0819 18:36:06.853863   79072 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0819 18:36:07.476320   76968 pod_ready.go:93] pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace has status "Ready":"True"
	I0819 18:36:07.476345   76968 pod_ready.go:82] duration metric: took 15.010176816s for pod "coredns-6f6b679f8f-qkl7c" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.476356   76968 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.484115   76968 pod_ready.go:93] pod "etcd-flannel-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:36:07.484155   76968 pod_ready.go:82] duration metric: took 7.790031ms for pod "etcd-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.484174   76968 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.491277   76968 pod_ready.go:93] pod "kube-apiserver-flannel-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:36:07.491312   76968 pod_ready.go:82] duration metric: took 7.125222ms for pod "kube-apiserver-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.491329   76968 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.499732   76968 pod_ready.go:93] pod "kube-controller-manager-flannel-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:36:07.499755   76968 pod_ready.go:82] duration metric: took 8.417108ms for pod "kube-controller-manager-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.499769   76968 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-v86pk" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.505687   76968 pod_ready.go:93] pod "kube-proxy-v86pk" in "kube-system" namespace has status "Ready":"True"
	I0819 18:36:07.505711   76968 pod_ready.go:82] duration metric: took 5.935305ms for pod "kube-proxy-v86pk" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.505723   76968 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.870638   76968 pod_ready.go:93] pod "kube-scheduler-flannel-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:36:07.870660   76968 pod_ready.go:82] duration metric: took 364.929799ms for pod "kube-scheduler-flannel-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:07.870672   76968 pod_ready.go:39] duration metric: took 15.446282366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:36:07.870685   76968 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:36:07.870732   76968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:36:07.890058   76968 api_server.go:72] duration metric: took 25.312467834s to wait for apiserver process to appear ...
	I0819 18:36:07.890085   76968 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:36:07.890106   76968 api_server.go:253] Checking apiserver healthz at https://192.168.61.93:8443/healthz ...
	I0819 18:36:07.896012   76968 api_server.go:279] https://192.168.61.93:8443/healthz returned 200:
	ok
	I0819 18:36:07.897209   76968 api_server.go:141] control plane version: v1.31.0
	I0819 18:36:07.897232   76968 api_server.go:131] duration metric: took 7.14095ms to wait for apiserver health ...
	I0819 18:36:07.897240   76968 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:36:08.075229   76968 system_pods.go:59] 7 kube-system pods found
	I0819 18:36:08.075260   76968 system_pods.go:61] "coredns-6f6b679f8f-qkl7c" [aaa6e665-b9fb-4c61-abbd-16c7463ad42c] Running
	I0819 18:36:08.075267   76968 system_pods.go:61] "etcd-flannel-321572" [8c534a9e-948f-4302-a0e1-18697f0a4e6e] Running
	I0819 18:36:08.075274   76968 system_pods.go:61] "kube-apiserver-flannel-321572" [0c76e938-238e-4c62-917b-316e588bdc1f] Running
	I0819 18:36:08.075280   76968 system_pods.go:61] "kube-controller-manager-flannel-321572" [c789c42f-5f45-4f99-b11b-015d6bb191d0] Running
	I0819 18:36:08.075285   76968 system_pods.go:61] "kube-proxy-v86pk" [584332bb-2bd3-43b0-b0e8-079f03b9595a] Running
	I0819 18:36:08.075290   76968 system_pods.go:61] "kube-scheduler-flannel-321572" [6bbcef22-2fa5-4387-9844-9c59e838b040] Running
	I0819 18:36:08.075295   76968 system_pods.go:61] "storage-provisioner" [b3194ce5-43d0-4d3b-b8ac-4661fd32af5b] Running
	I0819 18:36:08.075304   76968 system_pods.go:74] duration metric: took 178.057054ms to wait for pod list to return data ...
	I0819 18:36:08.075318   76968 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:36:08.271051   76968 default_sa.go:45] found service account: "default"
	I0819 18:36:08.271079   76968 default_sa.go:55] duration metric: took 195.754736ms for default service account to be created ...
	I0819 18:36:08.271088   76968 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:36:08.472519   76968 system_pods.go:86] 7 kube-system pods found
	I0819 18:36:08.472547   76968 system_pods.go:89] "coredns-6f6b679f8f-qkl7c" [aaa6e665-b9fb-4c61-abbd-16c7463ad42c] Running
	I0819 18:36:08.472553   76968 system_pods.go:89] "etcd-flannel-321572" [8c534a9e-948f-4302-a0e1-18697f0a4e6e] Running
	I0819 18:36:08.472557   76968 system_pods.go:89] "kube-apiserver-flannel-321572" [0c76e938-238e-4c62-917b-316e588bdc1f] Running
	I0819 18:36:08.472560   76968 system_pods.go:89] "kube-controller-manager-flannel-321572" [c789c42f-5f45-4f99-b11b-015d6bb191d0] Running
	I0819 18:36:08.472564   76968 system_pods.go:89] "kube-proxy-v86pk" [584332bb-2bd3-43b0-b0e8-079f03b9595a] Running
	I0819 18:36:08.472567   76968 system_pods.go:89] "kube-scheduler-flannel-321572" [6bbcef22-2fa5-4387-9844-9c59e838b040] Running
	I0819 18:36:08.472571   76968 system_pods.go:89] "storage-provisioner" [b3194ce5-43d0-4d3b-b8ac-4661fd32af5b] Running
	I0819 18:36:08.472582   76968 system_pods.go:126] duration metric: took 201.483438ms to wait for k8s-apps to be running ...
	I0819 18:36:08.472588   76968 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:36:08.472639   76968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:36:08.486588   76968 system_svc.go:56] duration metric: took 13.983697ms WaitForService to wait for kubelet
	I0819 18:36:08.486631   76968 kubeadm.go:582] duration metric: took 25.909048188s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:36:08.486654   76968 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:36:08.672717   76968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:36:08.672769   76968 node_conditions.go:123] node cpu capacity is 2
	I0819 18:36:08.672786   76968 node_conditions.go:105] duration metric: took 186.125524ms to run NodePressure ...
	I0819 18:36:08.672801   76968 start.go:241] waiting for startup goroutines ...
	I0819 18:36:08.672810   76968 start.go:246] waiting for cluster config update ...
	I0819 18:36:08.672823   76968 start.go:255] writing updated cluster config ...
	I0819 18:36:08.673174   76968 ssh_runner.go:195] Run: rm -f paused
	I0819 18:36:08.722749   76968 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:36:08.725718   76968 out.go:177] * Done! kubectl is now configured to use "flannel-321572" cluster and "default" namespace by default
	I0819 18:36:06.855126   79072 main.go:141] libmachine: (bridge-321572) Calling .GetIP
	I0819 18:36:06.857568   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:06.857884   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:06.857915   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:06.858105   79072 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0819 18:36:06.861984   79072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:36:06.874188   79072 kubeadm.go:883] updating cluster {Name:bridge-321572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:36:06.874303   79072 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:36:06.874361   79072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:36:06.905653   79072 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0819 18:36:06.905753   79072 ssh_runner.go:195] Run: which lz4
	I0819 18:36:06.909813   79072 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 18:36:06.913815   79072 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 18:36:06.913848   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0819 18:36:08.163315   79072 crio.go:462] duration metric: took 1.253525789s to copy over tarball
	I0819 18:36:08.163399   79072 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 18:36:10.366204   79072 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.202771577s)
	I0819 18:36:10.366239   79072 crio.go:469] duration metric: took 2.2028904s to extract the tarball
	I0819 18:36:10.366249   79072 ssh_runner.go:146] rm: /preloaded.tar.lz4
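
The three preceding steps copy a ~389 MB lz4 preload tarball into the guest, unpack it under /var with xattrs preserved, then delete it. Below is a minimal Go sketch of just the extraction command, run locally for illustration; minikube runs the identical command over SSH inside the guest, and the helper name here is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same extraction shown in the log lines above: unpack
// the lz4-compressed image preload into /var while keeping the
// security.capability extended attributes.
func extractPreload(tarball string) error {
	return exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload extraction failed:", err)
	}
}
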
	I0819 18:36:10.403803   79072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:36:10.447085   79072 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:36:10.447126   79072 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:36:10.447137   79072 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.31.0 crio true true} ...
	I0819 18:36:10.447264   79072 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-321572 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:bridge-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0819 18:36:10.447358   79072 ssh_runner.go:195] Run: crio config
	I0819 18:36:10.489976   79072 cni.go:84] Creating CNI manager for "bridge"
	I0819 18:36:10.489998   79072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:36:10.490020   79072 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-321572 NodeName:bridge-321572 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:36:10.490178   79072 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-321572"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 18:36:10.490255   79072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:36:10.500096   79072 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:36:10.500156   79072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 18:36:10.509283   79072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0819 18:36:10.525460   79072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:36:10.543038   79072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0819 18:36:10.562012   79072 ssh_runner.go:195] Run: grep 192.168.39.54	control-plane.minikube.internal$ /etc/hosts
	I0819 18:36:10.565798   79072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
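
The one-liner above (and the earlier host.minikube.internal variant) makes the /etc/hosts update idempotent: strip any previous record for the name, then append a fresh "ip<tab>name" line. A hedged Go sketch of the same pattern follows; ensureHostsEntry is a hypothetical helper and the command runs locally here rather than over SSH in the guest.

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostsEntry drops any existing /etc/hosts line ending in "<tab><name>",
// appends "ip<tab>name", and copies the result back into place with sudo.
func ensureHostsEntry(ip, name string) error {
	cmd := fmt.Sprintf(
		"{ grep -v $'\\t%s$' /etc/hosts; printf '%%s\\t%%s\\n' %s %s; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		name, ip, name)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	// Values taken from the log above.
	if err := ensureHostsEntry("192.168.39.54", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
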
	I0819 18:36:10.577543   79072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:36:10.706194   79072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:36:10.723035   79072 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572 for IP: 192.168.39.54
	I0819 18:36:10.723065   79072 certs.go:194] generating shared ca certs ...
	I0819 18:36:10.723085   79072 certs.go:226] acquiring lock for ca certs: {Name:mkedc27ba706b77c42f0fc763dcfbebc3047bc71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:10.723304   79072 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key
	I0819 18:36:10.723385   79072 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key
	I0819 18:36:10.723400   79072 certs.go:256] generating profile certs ...
	I0819 18:36:10.723474   79072 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/client.key
	I0819 18:36:10.723502   79072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/client.crt with IP's: []
	I0819 18:36:10.948852   79072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/client.crt ...
	I0819 18:36:10.948881   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/client.crt: {Name:mkcbfa102865ea035b603ed1b9e516a16b2bf39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:10.949050   79072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/client.key ...
	I0819 18:36:10.949060   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/client.key: {Name:mk254b05db77c7a2d536726f62e945a0cf771590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:10.949136   79072 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.key.350d4c4c
	I0819 18:36:10.949151   79072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.crt.350d4c4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.54]
	I0819 18:36:11.245155   79072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.crt.350d4c4c ...
	I0819 18:36:11.245190   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.crt.350d4c4c: {Name:mkea4d4dba630690f2b6e57bca97828c3734620e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:11.245374   79072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.key.350d4c4c ...
	I0819 18:36:11.245390   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.key.350d4c4c: {Name:mkb1cac1f95b139c7cc965c79d056065bd536b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:11.245469   79072 certs.go:381] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.crt.350d4c4c -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.crt
	I0819 18:36:11.245600   79072 certs.go:385] copying /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.key.350d4c4c -> /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.key
	I0819 18:36:11.245710   79072 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.key
	I0819 18:36:11.245736   79072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.crt with IP's: []
	I0819 18:36:11.304078   79072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.crt ...
	I0819 18:36:11.304110   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.crt: {Name:mkbf01390ee2412b08950b5d05bfb7d6935c986e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:11.304277   79072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.key ...
	I0819 18:36:11.304289   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.key: {Name:mk5056415d2a17956df2447032d5b1db70846892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:11.304496   79072 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem (1338 bytes)
	W0819 18:36:11.304536   79072 certs.go:480] ignoring /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837_empty.pem, impossibly tiny 0 bytes
	I0819 18:36:11.304548   79072 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 18:36:11.304572   79072 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:36:11.304596   79072 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:36:11.304621   79072 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/certs/key.pem (1679 bytes)
	I0819 18:36:11.304660   79072 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem (1708 bytes)
	I0819 18:36:11.305260   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:36:11.330559   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:36:11.352890   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:36:11.377230   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:36:11.402447   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 18:36:11.431486   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:36:11.461787   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:36:11.486845   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/bridge-321572/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:36:11.508176   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/ssl/certs/178372.pem --> /usr/share/ca-certificates/178372.pem (1708 bytes)
	I0819 18:36:11.529800   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:36:11.551658   79072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-10654/.minikube/certs/17837.pem --> /usr/share/ca-certificates/17837.pem (1338 bytes)
	I0819 18:36:11.574879   79072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:36:11.590404   79072 ssh_runner.go:195] Run: openssl version
	I0819 18:36:11.596050   79072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178372.pem && ln -fs /usr/share/ca-certificates/178372.pem /etc/ssl/certs/178372.pem"
	I0819 18:36:11.606266   79072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178372.pem
	I0819 18:36:11.610694   79072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 17:05 /usr/share/ca-certificates/178372.pem
	I0819 18:36:11.610756   79072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178372.pem
	I0819 18:36:11.616406   79072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178372.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:36:11.626443   79072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:36:11.636389   79072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:36:11.640518   79072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 16:53 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:36:11.640567   79072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:36:11.645761   79072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:36:11.655500   79072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17837.pem && ln -fs /usr/share/ca-certificates/17837.pem /etc/ssl/certs/17837.pem"
	I0819 18:36:11.666098   79072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17837.pem
	I0819 18:36:11.670211   79072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 17:05 /usr/share/ca-certificates/17837.pem
	I0819 18:36:11.670267   79072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17837.pem
	I0819 18:36:11.675575   79072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17837.pem /etc/ssl/certs/51391683.0"
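
The openssl/ln sequence above installs each CA under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). Below is a small Go sketch of that pattern, assuming local root access and a hypothetical linkCACert helper rather than minikube's ssh_runner-based flow.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert asks openssl for the subject hash of a PEM file, then exposes it
// to OpenSSL consumers as /etc/ssl/certs/<hash>.0 -> <pem>.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link, as `ln -fs` would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
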
	I0819 18:36:11.685656   79072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:36:11.689135   79072 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:36:11.689184   79072 kubeadm.go:392] StartCluster: {Name:bridge-321572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-321572 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:36:11.689254   79072 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:36:11.689306   79072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:36:11.721642   79072 cri.go:89] found id: ""
	I0819 18:36:11.721705   79072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:36:11.730792   79072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 18:36:11.740143   79072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 18:36:11.750189   79072 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 18:36:11.750215   79072 kubeadm.go:157] found existing configuration files:
	
	I0819 18:36:11.750264   79072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 18:36:11.760111   79072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 18:36:11.760182   79072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 18:36:11.770665   79072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 18:36:11.780337   79072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 18:36:11.780388   79072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 18:36:11.789772   79072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 18:36:11.798450   79072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 18:36:11.798506   79072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 18:36:11.807361   79072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 18:36:11.815863   79072 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 18:36:11.815941   79072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 18:36:11.824743   79072 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 18:36:11.875939   79072 kubeadm.go:310] W0819 18:36:11.859389     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:36:11.876697   79072 kubeadm.go:310] W0819 18:36:11.860333     855 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 18:36:11.974463   79072 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 18:36:22.081426   79072 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 18:36:22.081504   79072 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 18:36:22.081582   79072 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 18:36:22.081678   79072 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 18:36:22.081797   79072 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 18:36:22.081852   79072 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 18:36:22.083386   79072 out.go:235]   - Generating certificates and keys ...
	I0819 18:36:22.083481   79072 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 18:36:22.083688   79072 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 18:36:22.083803   79072 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 18:36:22.083858   79072 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 18:36:22.083928   79072 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 18:36:22.083999   79072 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 18:36:22.084093   79072 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 18:36:22.084272   79072 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-321572 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0819 18:36:22.084356   79072 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 18:36:22.084496   79072 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-321572 localhost] and IPs [192.168.39.54 127.0.0.1 ::1]
	I0819 18:36:22.084604   79072 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 18:36:22.084708   79072 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 18:36:22.084807   79072 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 18:36:22.084884   79072 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 18:36:22.084952   79072 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 18:36:22.085029   79072 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 18:36:22.085112   79072 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 18:36:22.085209   79072 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 18:36:22.085270   79072 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 18:36:22.085374   79072 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 18:36:22.085438   79072 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 18:36:22.086968   79072 out.go:235]   - Booting up control plane ...
	I0819 18:36:22.087076   79072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 18:36:22.087172   79072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 18:36:22.087253   79072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 18:36:22.087370   79072 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 18:36:22.087477   79072 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 18:36:22.087537   79072 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 18:36:22.087678   79072 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 18:36:22.087786   79072 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 18:36:22.087874   79072 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001366198s
	I0819 18:36:22.087951   79072 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 18:36:22.088026   79072 kubeadm.go:310] [api-check] The API server is healthy after 5.001051062s
	I0819 18:36:22.088143   79072 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 18:36:22.088239   79072 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 18:36:22.088285   79072 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 18:36:22.088481   79072 kubeadm.go:310] [mark-control-plane] Marking the node bridge-321572 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 18:36:22.088557   79072 kubeadm.go:310] [bootstrap-token] Using token: aneew1.38uhb0867fwyemu8
	I0819 18:36:22.089981   79072 out.go:235]   - Configuring RBAC rules ...
	I0819 18:36:22.090094   79072 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 18:36:22.090172   79072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 18:36:22.090295   79072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 18:36:22.090413   79072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 18:36:22.090521   79072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 18:36:22.090595   79072 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 18:36:22.090683   79072 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 18:36:22.090727   79072 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 18:36:22.090764   79072 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 18:36:22.090770   79072 kubeadm.go:310] 
	I0819 18:36:22.090815   79072 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 18:36:22.090821   79072 kubeadm.go:310] 
	I0819 18:36:22.090897   79072 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 18:36:22.090904   79072 kubeadm.go:310] 
	I0819 18:36:22.090938   79072 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 18:36:22.090988   79072 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 18:36:22.091030   79072 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 18:36:22.091037   79072 kubeadm.go:310] 
	I0819 18:36:22.091084   79072 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 18:36:22.091093   79072 kubeadm.go:310] 
	I0819 18:36:22.091146   79072 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 18:36:22.091153   79072 kubeadm.go:310] 
	I0819 18:36:22.091192   79072 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 18:36:22.091270   79072 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 18:36:22.091361   79072 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 18:36:22.091370   79072 kubeadm.go:310] 
	I0819 18:36:22.091441   79072 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 18:36:22.091506   79072 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 18:36:22.091513   79072 kubeadm.go:310] 
	I0819 18:36:22.091584   79072 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token aneew1.38uhb0867fwyemu8 \
	I0819 18:36:22.091676   79072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 \
	I0819 18:36:22.091697   79072 kubeadm.go:310] 	--control-plane 
	I0819 18:36:22.091701   79072 kubeadm.go:310] 
	I0819 18:36:22.091796   79072 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 18:36:22.091815   79072 kubeadm.go:310] 
	I0819 18:36:22.091910   79072 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token aneew1.38uhb0867fwyemu8 \
	I0819 18:36:22.092058   79072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2bdb1375495f2436c23fa8022a946ca0060755b2ed80f99d0084e3880c32c9c2 
	I0819 18:36:22.092074   79072 cni.go:84] Creating CNI manager for "bridge"
	I0819 18:36:22.093703   79072 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 18:36:22.094820   79072 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 18:36:22.105721   79072 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
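
The 496-byte file written above is the bridge CNI chain minikube installs when CNI:bridge is selected. The exact template is not shown in the log, so the sketch below writes a plausible minimal equivalent (bridge plugin with host-local IPAM on the 10.244.0.0/16 pod CIDR, plus portmap for hostPort support); treat the field values as assumptions.

package main

import (
	"fmt"
	"os"
)

// A minimal bridge CNI conflist of the general shape referenced in the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
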
	I0819 18:36:22.124739   79072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 18:36:22.124873   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:22.124878   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-321572 minikube.k8s.io/updated_at=2024_08_19T18_36_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=bridge-321572 minikube.k8s.io/primary=true
	I0819 18:36:22.153319   79072 ops.go:34] apiserver oom_adj: -16
	I0819 18:36:22.277690   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:22.778096   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:23.277785   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:23.778425   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:24.278700   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:24.778649   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:25.278037   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:25.777975   79072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 18:36:25.870860   79072 kubeadm.go:1113] duration metric: took 3.746058837s to wait for elevateKubeSystemPrivileges
	I0819 18:36:25.870898   79072 kubeadm.go:394] duration metric: took 14.181715163s to StartCluster
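
The repeated `kubectl get sa default` runs above are a ~500 ms polling loop that gates elevateKubeSystemPrivileges: the RBAC bootstrap can only proceed once the "default" ServiceAccount exists. A Go sketch of that loop using the same command and paths from the log; waitForDefaultSA is a hypothetical name, not minikube's own function.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA keeps asking the apiserver for the "default" ServiceAccount
// until it exists or the deadline passes.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; RBAC setup can continue
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(1 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
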
	I0819 18:36:25.870919   79072 settings.go:142] acquiring lock: {Name:mk28a13f3174b598dc486d0431c92387ef1e3e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:25.871005   79072 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 18:36:25.873116   79072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-10654/kubeconfig: {Name:mkf294a222534997c5f8c7826543e50ace133713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:36:25.873391   79072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 18:36:25.873428   79072 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:36:25.873401   79072 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:36:25.873496   79072 addons.go:69] Setting storage-provisioner=true in profile "bridge-321572"
	I0819 18:36:25.873512   79072 addons.go:69] Setting default-storageclass=true in profile "bridge-321572"
	I0819 18:36:25.873522   79072 addons.go:234] Setting addon storage-provisioner=true in "bridge-321572"
	I0819 18:36:25.873555   79072 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-321572"
	I0819 18:36:25.873611   79072 config.go:182] Loaded profile config "bridge-321572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:36:25.873557   79072 host.go:66] Checking if "bridge-321572" exists ...
	I0819 18:36:25.874008   79072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:25.874038   79072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:25.874114   79072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:25.874148   79072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:25.875277   79072 out.go:177] * Verifying Kubernetes components...
	I0819 18:36:25.876572   79072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:36:25.894961   79072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0819 18:36:25.895013   79072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I0819 18:36:25.895449   79072 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:25.895490   79072 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:25.895962   79072 main.go:141] libmachine: Using API Version  1
	I0819 18:36:25.895978   79072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:25.896131   79072 main.go:141] libmachine: Using API Version  1
	I0819 18:36:25.896145   79072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:25.896322   79072 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:25.896456   79072 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:25.896530   79072 main.go:141] libmachine: (bridge-321572) Calling .GetState
	I0819 18:36:25.897048   79072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:25.897077   79072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:25.900922   79072 addons.go:234] Setting addon default-storageclass=true in "bridge-321572"
	I0819 18:36:25.900967   79072 host.go:66] Checking if "bridge-321572" exists ...
	I0819 18:36:25.901356   79072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:25.901387   79072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:25.914355   79072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I0819 18:36:25.914896   79072 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:25.915473   79072 main.go:141] libmachine: Using API Version  1
	I0819 18:36:25.915505   79072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:25.915874   79072 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:25.916074   79072 main.go:141] libmachine: (bridge-321572) Calling .GetState
	I0819 18:36:25.916348   79072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0819 18:36:25.916804   79072 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:25.917605   79072 main.go:141] libmachine: Using API Version  1
	I0819 18:36:25.917630   79072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:25.917998   79072 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:25.918048   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:25.918481   79072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 18:36:25.918510   79072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 18:36:25.920212   79072 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 18:36:25.921663   79072 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:36:25.921680   79072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 18:36:25.921698   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:25.924871   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:25.925262   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:25.925292   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:25.925428   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:25.925604   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:25.925739   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:25.925878   79072 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa Username:docker}
	I0819 18:36:25.934751   79072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37171
	I0819 18:36:25.935242   79072 main.go:141] libmachine: () Calling .GetVersion
	I0819 18:36:25.935808   79072 main.go:141] libmachine: Using API Version  1
	I0819 18:36:25.935834   79072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 18:36:25.936218   79072 main.go:141] libmachine: () Calling .GetMachineName
	I0819 18:36:25.936397   79072 main.go:141] libmachine: (bridge-321572) Calling .GetState
	I0819 18:36:25.937954   79072 main.go:141] libmachine: (bridge-321572) Calling .DriverName
	I0819 18:36:25.938160   79072 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 18:36:25.938176   79072 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 18:36:25.938189   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHHostname
	I0819 18:36:25.940888   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:25.941278   79072 main.go:141] libmachine: (bridge-321572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:2c:60", ip: ""} in network mk-bridge-321572: {Iface:virbr1 ExpiryTime:2024-08-19 19:35:54 +0000 UTC Type:0 Mac:52:54:00:3e:2c:60 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:bridge-321572 Clientid:01:52:54:00:3e:2c:60}
	I0819 18:36:25.941314   79072 main.go:141] libmachine: (bridge-321572) DBG | domain bridge-321572 has defined IP address 192.168.39.54 and MAC address 52:54:00:3e:2c:60 in network mk-bridge-321572
	I0819 18:36:25.941417   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHPort
	I0819 18:36:25.941614   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHKeyPath
	I0819 18:36:25.941794   79072 main.go:141] libmachine: (bridge-321572) Calling .GetSSHUsername
	I0819 18:36:25.941916   79072 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/bridge-321572/id_rsa Username:docker}
	I0819 18:36:26.080398   79072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 18:36:26.101325   79072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:36:26.261311   79072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 18:36:26.356744   79072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 18:36:26.596458   79072 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
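
The sed pipeline a few lines up rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the gateway IP inside pods. The Go sketch below shows only the Corefile edit it performs (the real flow round-trips the whole ConfigMap through kubectl get/replace); injectHostRecord is a hypothetical helper and the sample Corefile is illustrative.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block resolving host.minikube.internal to
// the gateway IP just before the "forward . /etc/resolv.conf" line.
func injectHostRecord(corefile, gatewayIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		gatewayIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Illustrative Corefile fragment; the in-cluster one carries more plugins.
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
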
	I0819 18:36:26.596596   79072 main.go:141] libmachine: Making call to close driver server
	I0819 18:36:26.596622   79072 main.go:141] libmachine: (bridge-321572) Calling .Close
	I0819 18:36:26.596935   79072 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:36:26.596950   79072 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:36:26.596960   79072 main.go:141] libmachine: Making call to close driver server
	I0819 18:36:26.596969   79072 main.go:141] libmachine: (bridge-321572) Calling .Close
	I0819 18:36:26.597183   79072 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:36:26.597211   79072 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:36:26.597236   79072 main.go:141] libmachine: (bridge-321572) DBG | Closing plugin on server side
	I0819 18:36:26.598192   79072 node_ready.go:35] waiting up to 15m0s for node "bridge-321572" to be "Ready" ...
	I0819 18:36:26.613203   79072 node_ready.go:49] node "bridge-321572" has status "Ready":"True"
	I0819 18:36:26.613226   79072 node_ready.go:38] duration metric: took 15.0107ms for node "bridge-321572" to be "Ready" ...
	I0819 18:36:26.613238   79072 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:36:26.624732   79072 main.go:141] libmachine: Making call to close driver server
	I0819 18:36:26.624782   79072 main.go:141] libmachine: (bridge-321572) Calling .Close
	I0819 18:36:26.625062   79072 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:36:26.625080   79072 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:36:26.639362   79072 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace to be "Ready" ...
	I0819 18:36:27.102279   79072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-321572" context rescaled to 1 replicas
	I0819 18:36:27.132115   79072 main.go:141] libmachine: Making call to close driver server
	I0819 18:36:27.132148   79072 main.go:141] libmachine: (bridge-321572) Calling .Close
	I0819 18:36:27.132550   79072 main.go:141] libmachine: (bridge-321572) DBG | Closing plugin on server side
	I0819 18:36:27.132583   79072 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:36:27.132595   79072 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:36:27.132608   79072 main.go:141] libmachine: Making call to close driver server
	I0819 18:36:27.132616   79072 main.go:141] libmachine: (bridge-321572) Calling .Close
	I0819 18:36:27.132912   79072 main.go:141] libmachine: (bridge-321572) DBG | Closing plugin on server side
	I0819 18:36:27.132956   79072 main.go:141] libmachine: Successfully made call to close driver server
	I0819 18:36:27.132966   79072 main.go:141] libmachine: Making call to close connection to plugin binary
	I0819 18:36:27.134895   79072 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0819 18:36:27.136682   79072 addons.go:510] duration metric: took 1.263252555s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0819 18:36:28.646223   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:30.646939   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:33.147633   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:35.647771   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:37.649670   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:40.146422   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:42.646353   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:45.145821   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:47.646052   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:50.145772   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:52.646800   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:55.146356   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:36:57.648213   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:00.146504   79072 pod_ready.go:103] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"False"
	I0819 18:37:02.154616   79072 pod_ready.go:93] pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:02.154641   79072 pod_ready.go:82] duration metric: took 35.515249305s for pod "coredns-6f6b679f8f-96l6w" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.154653   79072 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-t5fgr" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.158642   79072 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-t5fgr" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-t5fgr" not found
	I0819 18:37:02.158665   79072 pod_ready.go:82] duration metric: took 4.005442ms for pod "coredns-6f6b679f8f-t5fgr" in "kube-system" namespace to be "Ready" ...
	E0819 18:37:02.158675   79072 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-t5fgr" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-t5fgr" not found
	I0819 18:37:02.158681   79072 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.164714   79072 pod_ready.go:93] pod "etcd-bridge-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:02.164731   79072 pod_ready.go:82] duration metric: took 6.044106ms for pod "etcd-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.164740   79072 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.172671   79072 pod_ready.go:93] pod "kube-apiserver-bridge-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:02.172691   79072 pod_ready.go:82] duration metric: took 7.944443ms for pod "kube-apiserver-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.172700   79072 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.178245   79072 pod_ready.go:93] pod "kube-controller-manager-bridge-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:02.178274   79072 pod_ready.go:82] duration metric: took 5.566683ms for pod "kube-controller-manager-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.178287   79072 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-z2rgj" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.344079   79072 pod_ready.go:93] pod "kube-proxy-z2rgj" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:02.344104   79072 pod_ready.go:82] duration metric: took 165.810116ms for pod "kube-proxy-z2rgj" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.344113   79072 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.744300   79072 pod_ready.go:93] pod "kube-scheduler-bridge-321572" in "kube-system" namespace has status "Ready":"True"
	I0819 18:37:02.744325   79072 pod_ready.go:82] duration metric: took 400.204708ms for pod "kube-scheduler-bridge-321572" in "kube-system" namespace to be "Ready" ...
	I0819 18:37:02.744337   79072 pod_ready.go:39] duration metric: took 36.131084158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:37:02.744354   79072 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:37:02.744411   79072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:37:02.760009   79072 api_server.go:72] duration metric: took 36.886505283s to wait for apiserver process to appear ...
	I0819 18:37:02.760044   79072 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:37:02.760064   79072 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0819 18:37:02.765293   79072 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0819 18:37:02.766181   79072 api_server.go:141] control plane version: v1.31.0
	I0819 18:37:02.766207   79072 api_server.go:131] duration metric: took 6.155695ms to wait for apiserver health ...
	I0819 18:37:02.766214   79072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:37:02.947539   79072 system_pods.go:59] 7 kube-system pods found
	I0819 18:37:02.947570   79072 system_pods.go:61] "coredns-6f6b679f8f-96l6w" [d8767880-fbd7-4d7a-9982-863a52467c8b] Running
	I0819 18:37:02.947577   79072 system_pods.go:61] "etcd-bridge-321572" [2bfe3825-63f9-41f7-8321-04d9c4874721] Running
	I0819 18:37:02.947584   79072 system_pods.go:61] "kube-apiserver-bridge-321572" [f8bb751c-5e01-46c2-b7a5-c5518b699d5b] Running
	I0819 18:37:02.947589   79072 system_pods.go:61] "kube-controller-manager-bridge-321572" [665ed3e1-c833-4a11-94d9-9fa9a760ab62] Running
	I0819 18:37:02.947595   79072 system_pods.go:61] "kube-proxy-z2rgj" [6feb53de-c8a2-4630-8b84-33bc9098b3ee] Running
	I0819 18:37:02.947600   79072 system_pods.go:61] "kube-scheduler-bridge-321572" [03832bed-d4f0-4d69-8450-b55f59b42fd0] Running
	I0819 18:37:02.947604   79072 system_pods.go:61] "storage-provisioner" [6354119e-447c-4a1b-9b96-6b111f08ed3d] Running
	I0819 18:37:02.947612   79072 system_pods.go:74] duration metric: took 181.39186ms to wait for pod list to return data ...
	I0819 18:37:02.947623   79072 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:37:03.144265   79072 default_sa.go:45] found service account: "default"
	I0819 18:37:03.144294   79072 default_sa.go:55] duration metric: took 196.664058ms for default service account to be created ...
	I0819 18:37:03.144304   79072 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:37:03.345809   79072 system_pods.go:86] 7 kube-system pods found
	I0819 18:37:03.345836   79072 system_pods.go:89] "coredns-6f6b679f8f-96l6w" [d8767880-fbd7-4d7a-9982-863a52467c8b] Running
	I0819 18:37:03.345844   79072 system_pods.go:89] "etcd-bridge-321572" [2bfe3825-63f9-41f7-8321-04d9c4874721] Running
	I0819 18:37:03.345849   79072 system_pods.go:89] "kube-apiserver-bridge-321572" [f8bb751c-5e01-46c2-b7a5-c5518b699d5b] Running
	I0819 18:37:03.345855   79072 system_pods.go:89] "kube-controller-manager-bridge-321572" [665ed3e1-c833-4a11-94d9-9fa9a760ab62] Running
	I0819 18:37:03.345860   79072 system_pods.go:89] "kube-proxy-z2rgj" [6feb53de-c8a2-4630-8b84-33bc9098b3ee] Running
	I0819 18:37:03.345865   79072 system_pods.go:89] "kube-scheduler-bridge-321572" [03832bed-d4f0-4d69-8450-b55f59b42fd0] Running
	I0819 18:37:03.345869   79072 system_pods.go:89] "storage-provisioner" [6354119e-447c-4a1b-9b96-6b111f08ed3d] Running
	I0819 18:37:03.345877   79072 system_pods.go:126] duration metric: took 201.567348ms to wait for k8s-apps to be running ...
	I0819 18:37:03.345886   79072 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:37:03.345940   79072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:37:03.359510   79072 system_svc.go:56] duration metric: took 13.619174ms WaitForService to wait for kubelet
	I0819 18:37:03.359535   79072 kubeadm.go:582] duration metric: took 37.486033942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:37:03.359556   79072 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:37:03.544126   79072 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0819 18:37:03.544155   79072 node_conditions.go:123] node cpu capacity is 2
	I0819 18:37:03.544165   79072 node_conditions.go:105] duration metric: took 184.604227ms to run NodePressure ...
	I0819 18:37:03.544176   79072 start.go:241] waiting for startup goroutines ...
	I0819 18:37:03.544182   79072 start.go:246] waiting for cluster config update ...
	I0819 18:37:03.544191   79072 start.go:255] writing updated cluster config ...
	I0819 18:37:03.544448   79072 ssh_runner.go:195] Run: rm -f paused
	I0819 18:37:03.589959   79072 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:37:03.592059   79072 out.go:177] * Done! kubectl is now configured to use "bridge-321572" cluster and "default" namespace by default
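	For reference, the readiness sequence recorded above (node "Ready", system-critical pods "Ready", then the apiserver /healthz probe) can be reproduced by hand. The following is a minimal Go sketch and not minikube's own implementation (that logic lives in api_server.go and pod_ready.go per the log); it simply polls the healthz endpoint until it returns HTTP 200. The endpoint URL is copied from the log above, and the use of InsecureSkipVerify is an assumption made purely for illustration, since a real client should trust the cluster CA bundle instead.

// healthzpoll.go: poll an apiserver /healthz endpoint until it answers 200,
// mirroring the "waiting for apiserver healthz status" step in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust for your cluster.
	endpoint := "https://192.168.39.54:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only: skip TLS verification.
			// A production client should load the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz returned 200: ok")
				return
			}
		}
		// Back off briefly between probes, as the log shows repeated checks.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}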
	
	
	==> CRI-O <==
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.814218866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da96a1cd-68cf-439e-9d98-738ecf4c5fdf name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.815330569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a05be7b4-d3dd-4506-b748-0dd28776856e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.815869255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092896815827684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a05be7b4-d3dd-4506-b748-0dd28776856e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.816470271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dc683f1-ace7-483d-94a7-dfcaa2f54c01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.816521721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dc683f1-ace7-483d-94a7-dfcaa2f54c01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.816761393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dc683f1-ace7-483d-94a7-dfcaa2f54c01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.852300906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd08953b-6b5c-4986-a63c-ac752bfefa19 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.852389760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd08953b-6b5c-4986-a63c-ac752bfefa19 name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.853522156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=411adba8-bf23-4dc4-93ea-e2c99d6e9f2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.853992967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092896853966061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=411adba8-bf23-4dc4-93ea-e2c99d6e9f2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.854416020Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f828251-deeb-48be-b3a0-2f05d927cdca name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.854486406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f828251-deeb-48be-b3a0-2f05d927cdca name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.854791825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f828251-deeb-48be-b3a0-2f05d927cdca name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.874707979Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=251dac1b-81ac-4c8d-9224-701e1a309514 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.874967447Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:26d63f30-45fd-48f4-973e-6a72cf931b9d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091964821210966,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-19T18:26:04.510029589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:923f5bbdccbf220daf9a4cd88b6aff2db9b4cf759b9a7b852c59cd16ba8f423f,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-j8qbw,Uid:6c7ec046-01e2-4903-9937-c79aabc81bb2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091964667482671,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-j8qbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c7ec046-01e2-4903-9937-c79aabc81bb
2,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:04.361325271Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-274qq,Uid:af408da7-683b-4730-b836-a5ae446e84d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091963033498270,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:02.723511264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-j764j,Uid:726e772d-dd20-4427
-b8b2-40422b5be1ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091963031058924,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726e772d-dd20-4427-b8b2-40422b5be1ef,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:02.695433875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&PodSandboxMetadata{Name:kube-proxy-df5kf,Uid:0f004f8f-d49f-468e-acac-a7d691c9cdba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091962857367234,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-19T18:26:02.547507824Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-306581,Uid:aabf286bc9c738fac48e9947f3fc0100,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724091952130021886,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.181:8443,kubernetes.io/config.hash: aabf286bc9c738fac48e9947f3fc0100,kubernetes.io/config.seen: 2024-08-19T18:25:51.674524755Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e63256173f447a4709e23d5a577b
3383b611e43247b0d254d3e56a92169815a6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-306581,Uid:ef10e3f64821ad739cb86e41c4230360,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091952128024771,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ef10e3f64821ad739cb86e41c4230360,kubernetes.io/config.seen: 2024-08-19T18:25:51.674526946Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-306581,Uid:584eb78fa73054250a13e68afac29f82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091952125852315,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.181:2379,kubernetes.io/config.hash: 584eb78fa73054250a13e68afac29f82,kubernetes.io/config.seen: 2024-08-19T18:25:51.674520273Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-306581,Uid:d61941e45b337edba2e6d09e2044800d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724091952123544004,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: d61941e45b337edba2e6d09e2044800d,kubernetes.io/config.seen: 2024-08-19T18:25:51.674525794Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-306581,Uid:aabf286bc9c738fac48e9947f3fc0100,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724091662900193981,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.181:8443,kubernetes.io/config.hash: aabf286bc9c738fac48e9947f3fc0100,kubernetes.io/config.seen: 2024-08-19T18:21:02.354619975Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=251dac1b-81ac-4c8d-9224-701e1a309514 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.876951559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dbb2bb2-d7a8-42a9-a931-d9808e2057a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.877026456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dbb2bb2-d7a8-42a9-a931-d9808e2057a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.877223857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dbb2bb2-d7a8-42a9-a931-d9808e2057a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.897349588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c4f84db-d383-4ad7-bdc1-dd0d3f71bffb name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.897444197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c4f84db-d383-4ad7-bdc1-dd0d3f71bffb name=/runtime.v1.RuntimeService/Version
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.898731794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62e6e86d-354b-4d19-83cd-12414c5eb75b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.899142187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092896899120854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62e6e86d-354b-4d19-83cd-12414c5eb75b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.899504115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e048294-e6f7-4aeb-8834-220b76ad4b98 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.899578274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e048294-e6f7-4aeb-8834-220b76ad4b98 name=/runtime.v1.RuntimeService/ListContainers
	Aug 19 18:41:36 embed-certs-306581 crio[728]: time="2024-08-19 18:41:36.899931289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8,PodSandboxId:7a554e7e3cbbc9d3c1415fd8b7008f96a99f5f26366a33f6f069f28f9ffb21c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724091964922716454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d63f30-45fd-48f4-973e-6a72cf931b9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da,PodSandboxId:f18d0c432227a7ac31d7293efb3cfa298ee31a157efa522eab7ae37a7c662b45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963919466536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-274qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af408da7-683b-4730-b836-a5ae446e84d4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae,PodSandboxId:3985f838704b1effb6ccc0a0b182f26a9c15766fe86e18f396a0b1a4facc3c03,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724091963633859255,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j764j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
26e772d-dd20-4427-b8b2-40422b5be1ef,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d,PodSandboxId:58454aa433bddcf369586bf70d4c6791b7e21f6de548bf04495ae8c717b8fdf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724091963045903192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f004f8f-d49f-468e-acac-a7d691c9cdba,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03,PodSandboxId:887216af0d85d3e17893b213189d59f83986e97f76179ea18b889fa795c71d21,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724091952378966636,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 584eb78fa73054250a13e68afac29f82,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095,PodSandboxId:fca70f617a3a15b55ccca38784eae2141a94e46e5e3f43598cf737c310746ce5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724091952349629182,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7,PodSandboxId:0322d593e3c29deb75ce6ea0cadbae1b701ca4dd7d848e6d9c6f7c6ebc81041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724091952282550985,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61941e45b337edba2e6d09e2044800d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c,PodSandboxId:e63256173f447a4709e23d5a577b3383b611e43247b0d254d3e56a92169815a6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724091952299578879,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef10e3f64821ad739cb86e41c4230360,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7,PodSandboxId:b156d94d8add233e03bc73b725f52da52755d62d3eeea5a87b8e606b0b45125d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724091663846249517,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-306581,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aabf286bc9c738fac48e9947f3fc0100,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e048294-e6f7-4aeb-8834-220b76ad4b98 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a3faf70767cdd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   7a554e7e3cbbc       storage-provisioner
	4022599b0f0e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   f18d0c432227a       coredns-6f6b679f8f-274qq
	bc90a845e481d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   3985f838704b1       coredns-6f6b679f8f-j764j
	29723539f4118       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   15 minutes ago      Running             kube-proxy                0                   58454aa433bdd       kube-proxy-df5kf
	bc556da057424       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   887216af0d85d       etcd-embed-certs-306581
	c5d45d5ec1be7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   15 minutes ago      Running             kube-apiserver            2                   fca70f617a3a1       kube-apiserver-embed-certs-306581
	dd452eae270cd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   15 minutes ago      Running             kube-scheduler            2                   e63256173f447       kube-scheduler-embed-certs-306581
	94116d3e73bcb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   15 minutes ago      Running             kube-controller-manager   2                   0322d593e3c29       kube-controller-manager-embed-certs-306581
	2bcd811e39e2b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   20 minutes ago      Exited              kube-apiserver            1                   b156d94d8add2       kube-apiserver-embed-certs-306581
	
	
	==> coredns [4022599b0f0e3ce749dd86ffc596158a3475c478feb4f0eb263a491a6b0516da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bc90a845e481d8f633afeb08081ffc98aff79f486ca4da983c0445cb9fe2d7ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-306581
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-306581
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=embed-certs-306581
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_25_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:25:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-306581
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:41:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:41:25 +0000   Mon, 19 Aug 2024 18:25:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:41:25 +0000   Mon, 19 Aug 2024 18:25:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:41:25 +0000   Mon, 19 Aug 2024 18:25:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:41:25 +0000   Mon, 19 Aug 2024 18:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.181
	  Hostname:    embed-certs-306581
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c22361cf51d4549af6a9956c518d00d
	  System UUID:                1c22361c-f51d-4549-af6a-9956c518d00d
	  Boot ID:                    c25cae55-8312-4340-b9c6-45c51f945434
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-274qq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-j764j                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-306581                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-306581             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-306581    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-df5kf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-306581             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-j8qbw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-306581 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-306581 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-306581 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-306581 event: Registered Node embed-certs-306581 in Controller
	
	
	==> dmesg <==
	[  +0.051072] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038756] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.785033] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.870204] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.509295] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.556848] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.060858] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073459] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.167777] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.137783] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.279785] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +3.957657] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[Aug19 18:21] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +0.062184] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.714338] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.367881] kauditd_printk_skb: 85 callbacks suppressed
	[Aug19 18:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.551820] systemd-fstab-generator[2559]: Ignoring "noauto" option for root device
	[  +4.663905] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.395018] systemd-fstab-generator[2880]: Ignoring "noauto" option for root device
	[Aug19 18:26] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.088043] systemd-fstab-generator[3025]: Ignoring "noauto" option for root device
	[Aug19 18:27] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [bc556da0574245e8e95d984662de27da7b8afd3e9298b5e7153c0c96fae3de03] <==
	{"level":"warn","ts":"2024-08-19T18:33:23.248007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:33:22.896228Z","time spent":"351.766967ms","remote":"127.0.0.1:39584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-19T18:33:23.248637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.521716ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:23.248809Z","caller":"traceutil/trace.go:171","msg":"trace[1206272449] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:804; }","duration":"315.748684ms","start":"2024-08-19T18:33:22.933042Z","end":"2024-08-19T18:33:23.248790Z","steps":["trace[1206272449] 'range keys from in-memory index tree'  (duration: 314.303955ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:33:26.735159Z","caller":"traceutil/trace.go:171","msg":"trace[1289184813] transaction","detail":"{read_only:false; response_revision:807; number_of_response:1; }","duration":"221.139756ms","start":"2024-08-19T18:33:26.513996Z","end":"2024-08-19T18:33:26.735136Z","steps":["trace[1289184813] 'process raft request'  (duration: 121.485792ms)","trace[1289184813] 'compare'  (duration: 99.498927ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:33:26.860879Z","caller":"traceutil/trace.go:171","msg":"trace[642430515] transaction","detail":"{read_only:false; response_revision:808; number_of_response:1; }","duration":"119.0432ms","start":"2024-08-19T18:33:26.741819Z","end":"2024-08-19T18:33:26.860862Z","steps":["trace[642430515] 'process raft request'  (duration: 117.206405ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:55.309974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.352308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:55.310134Z","caller":"traceutil/trace.go:171","msg":"trace[852851575] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:829; }","duration":"413.549255ms","start":"2024-08-19T18:33:54.896565Z","end":"2024-08-19T18:33:55.310114Z","steps":["trace[852851575] 'range keys from in-memory index tree'  (duration: 413.240636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:55.310192Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:33:54.896528Z","time spent":"413.645391ms","remote":"127.0.0.1:39584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-19T18:33:55.310482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.337032ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:33:55.310541Z","caller":"traceutil/trace.go:171","msg":"trace[622923905] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:829; }","duration":"377.413324ms","start":"2024-08-19T18:33:54.933116Z","end":"2024-08-19T18:33:55.310530Z","steps":["trace[622923905] 'range keys from in-memory index tree'  (duration: 377.324383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:33:55.311075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.329428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2024-08-19T18:33:55.311134Z","caller":"traceutil/trace.go:171","msg":"trace[1338518563] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:829; }","duration":"221.391187ms","start":"2024-08-19T18:33:55.089729Z","end":"2024-08-19T18:33:55.311121Z","steps":["trace[1338518563] 'range keys from in-memory index tree'  (duration: 221.142398ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:34:57.939513Z","caller":"traceutil/trace.go:171","msg":"trace[594994971] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"113.046467ms","start":"2024-08-19T18:34:57.826433Z","end":"2024-08-19T18:34:57.939479Z","steps":["trace[594994971] 'process raft request'  (duration: 112.450689ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:35:26.234021Z","caller":"traceutil/trace.go:171","msg":"trace[181294270] transaction","detail":"{read_only:false; response_revision:901; number_of_response:1; }","duration":"151.583528ms","start":"2024-08-19T18:35:26.082417Z","end":"2024-08-19T18:35:26.234000Z","steps":["trace[181294270] 'process raft request'  (duration: 151.430433ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:35:26.716778Z","caller":"traceutil/trace.go:171","msg":"trace[1808275816] transaction","detail":"{read_only:false; response_revision:902; number_of_response:1; }","duration":"169.860274ms","start":"2024-08-19T18:35:26.546902Z","end":"2024-08-19T18:35:26.716763Z","steps":["trace[1808275816] 'process raft request'  (duration: 72.147444ms)","trace[1808275816] 'compare'  (duration: 97.196917ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T18:35:26.716815Z","caller":"traceutil/trace.go:171","msg":"trace[312439551] transaction","detail":"{read_only:false; response_revision:903; number_of_response:1; }","duration":"169.301211ms","start":"2024-08-19T18:35:26.547497Z","end":"2024-08-19T18:35:26.716799Z","steps":["trace[312439551] 'process raft request'  (duration: 168.951033ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:35:27.046084Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.263587ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T18:35:27.046152Z","caller":"traceutil/trace.go:171","msg":"trace[2068374588] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:903; }","duration":"113.396062ms","start":"2024-08-19T18:35:26.932744Z","end":"2024-08-19T18:35:27.046140Z","steps":["trace[2068374588] 'range keys from in-memory index tree'  (duration: 113.244745ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T18:35:53.485029Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-08-19T18:35:53.493361Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":682,"took":"7.958863ms","hash":594284381,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2256896,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-19T18:35:53.493434Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":594284381,"revision":682,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-19T18:36:12.112735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.995397ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11596928933042207037 > lease_revoke:<id:20f0916be36e38d5>","response":"size:28"}
	{"level":"info","ts":"2024-08-19T18:40:53.493391Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":924}
	{"level":"info","ts":"2024-08-19T18:40:53.497765Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":924,"took":"3.649603ms","hash":2583659589,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-19T18:40:53.497889Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2583659589,"revision":924,"compact-revision":682}
	
	
	==> kernel <==
	 18:41:37 up 20 min,  0 users,  load average: 0.44, 0.26, 0.19
	Linux embed-certs-306581 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2bcd811e39e2bea53c5abb19fee57148fa6be86be809d6756d052f4f11e29cc7] <==
	W0819 18:25:44.077467       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.101120       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.143334       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.203187       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.220603       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.262977       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.279429       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.321169       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.339556       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.363783       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.377577       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.426959       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.450512       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:44.488327       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:45.022907       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:47.984070       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:48.400636       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:48.590382       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.005032       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.020647       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.139340       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.170970       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.190647       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.227954       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0819 18:25:49.271615       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c5d45d5ec1be7cbf5551fde3dc6de4df5580c09d131c4d9259b568cc502ab095] <==
	I0819 18:36:55.859632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:36:55.859714       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:38:55.860104       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:38:55.860199       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 18:38:55.860105       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:38:55.860297       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:38:55.861545       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:38:55.861584       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0819 18:40:54.859837       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:40:54.859975       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0819 18:40:55.862091       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:40:55.862141       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0819 18:40:55.862103       1 handler_proxy.go:99] no RequestInfo found in the context
	E0819 18:40:55.862208       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0819 18:40:55.863280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 18:40:55.863319       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [94116d3e73bcb15d654acb9df8f832aa6eb95e63312abc9bf6bf8b592d9ce7d7] <==
	E0819 18:36:31.925815       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:36:32.438042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:37:01.933435       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:37:02.387858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="307.354µs"
	I0819 18:37:02.445310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:37:16.381731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="610.991µs"
	E0819 18:37:31.939585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:37:32.458914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:38:01.946492       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:38:02.465875       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:38:31.954355       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:38:32.473166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:39:01.960903       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:39:02.480517       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:39:31.966495       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:39:32.496827       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:40:01.974140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:40:02.506060       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:40:31.980712       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:40:32.512860       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0819 18:41:01.986899       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:41:02.520226       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0819 18:41:25.901834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-306581"
	E0819 18:41:31.993318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0819 18:41:32.534893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [29723539f4118447a68b54db513e910e6ab32f3d95e1aed93954cd7d2d773b8d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0819 18:26:03.471162       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0819 18:26:03.500535       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.181"]
	E0819 18:26:03.500637       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:26:03.641315       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0819 18:26:03.641377       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0819 18:26:03.641405       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:26:03.655482       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:26:03.655769       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:26:03.655793       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:26:03.667408       1 config.go:197] "Starting service config controller"
	I0819 18:26:03.667454       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:26:03.667519       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:26:03.667526       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:26:03.668342       1 config.go:326] "Starting node config controller"
	I0819 18:26:03.668364       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:26:03.768785       1 shared_informer.go:320] Caches are synced for service config
	I0819 18:26:03.768849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:26:03.768962       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dd452eae270cdc3d1687ad1a6f93ed1d8d44f3ccba8984d03d1c42608660263c] <==
	W0819 18:25:54.907337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:54.909344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:54.907404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:25:54.909360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.846838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:25:55.846888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.848325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:55.848369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.880099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:55.880148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.890742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 18:25:55.890787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:55.981892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:25:55.981994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.053424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:25:56.053544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.078368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:25:56.078464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.137809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:25:56.137896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.148007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:25:56.148147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:25:56.348004       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 18:25:56.348102       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0819 18:25:59.488765       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:40:22 embed-certs-306581 kubelet[2887]: E0819 18:40:22.366486    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:40:27 embed-certs-306581 kubelet[2887]: E0819 18:40:27.668337    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092827668003965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:27 embed-certs-306581 kubelet[2887]: E0819 18:40:27.668605    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092827668003965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:37 embed-certs-306581 kubelet[2887]: E0819 18:40:37.367371    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:40:37 embed-certs-306581 kubelet[2887]: E0819 18:40:37.670389    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092837669998230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:37 embed-certs-306581 kubelet[2887]: E0819 18:40:37.670480    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092837669998230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:47 embed-certs-306581 kubelet[2887]: E0819 18:40:47.672560    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092847672039794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:47 embed-certs-306581 kubelet[2887]: E0819 18:40:47.672908    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092847672039794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:48 embed-certs-306581 kubelet[2887]: E0819 18:40:48.366029    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:40:57 embed-certs-306581 kubelet[2887]: E0819 18:40:57.378729    2887 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 19 18:40:57 embed-certs-306581 kubelet[2887]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 19 18:40:57 embed-certs-306581 kubelet[2887]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 19 18:40:57 embed-certs-306581 kubelet[2887]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 19 18:40:57 embed-certs-306581 kubelet[2887]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 19 18:40:57 embed-certs-306581 kubelet[2887]: E0819 18:40:57.675103    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092857674789211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:57 embed-certs-306581 kubelet[2887]: E0819 18:40:57.675142    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092857674789211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:40:59 embed-certs-306581 kubelet[2887]: E0819 18:40:59.366502    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:41:07 embed-certs-306581 kubelet[2887]: E0819 18:41:07.676902    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092867676541336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:07 embed-certs-306581 kubelet[2887]: E0819 18:41:07.676992    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092867676541336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:14 embed-certs-306581 kubelet[2887]: E0819 18:41:14.366428    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:41:17 embed-certs-306581 kubelet[2887]: E0819 18:41:17.678836    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092877678298815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:17 embed-certs-306581 kubelet[2887]: E0819 18:41:17.679265    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092877678298815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:26 embed-certs-306581 kubelet[2887]: E0819 18:41:26.366862    2887 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-j8qbw" podUID="6c7ec046-01e2-4903-9937-c79aabc81bb2"
	Aug 19 18:41:27 embed-certs-306581 kubelet[2887]: E0819 18:41:27.681547    2887 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092887680975754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:41:27 embed-certs-306581 kubelet[2887]: E0819 18:41:27.681919    2887 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724092887680975754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a3faf70767cdd17e24d6e1b4db0567223e255470076752063a77581e145a0dc8] <==
	I0819 18:26:05.004304       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:26:05.014324       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:26:05.014540       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:26:05.024230       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:26:05.024396       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-306581_0f3bf2ec-21f3-43f5-92a4-a50b19d57be5!
	I0819 18:26:05.025861       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a48ae1f6-d14d-4f6a-8344-3fcd841841fe", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-306581_0f3bf2ec-21f3-43f5-92a4-a50b19d57be5 became leader
	I0819 18:26:05.126900       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-306581_0f3bf2ec-21f3-43f5-92a4-a50b19d57be5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-306581 -n embed-certs-306581
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-306581 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-j8qbw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-306581 describe pod metrics-server-6867b74b74-j8qbw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-306581 describe pod metrics-server-6867b74b74-j8qbw: exit status 1 (61.730125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-j8qbw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-306581 describe pod metrics-server-6867b74b74-j8qbw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (387.32s)
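
A minimal sketch of how the ImagePullBackOff recorded in the kubelet log above could be inspected by hand on a live embed-certs-306581 profile; the k8s-app=metrics-server label selector is an assumption here (the report only identifies the pod by its generated name), so adjust it if the addon's labels differ:

	# Show the metrics-server pod and its current state (the generated name changes per run).
	kubectl --context embed-certs-306581 get pods -n kube-system -l k8s-app=metrics-server -o wide
	# The Events section of describe surfaces the "Back-off pulling image fake.domain/registry.k8s.io/echoserver:1.4" entries.
	kubectl --context embed-certs-306581 describe pod -n kube-system -l k8s-app=metrics-server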

                                                
                                    

Test pass (252/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 31.26
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 17.41
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 79.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 132.43
31 TestAddons/serial/GCPAuth/Namespaces 0.15
33 TestAddons/parallel/Registry 15.63
35 TestAddons/parallel/InspektorGadget 10.97
37 TestAddons/parallel/HelmTiller 12.56
39 TestAddons/parallel/CSI 55.65
40 TestAddons/parallel/Headlamp 18.94
41 TestAddons/parallel/CloudSpanner 5.56
42 TestAddons/parallel/LocalPath 12.26
43 TestAddons/parallel/NvidiaDevicePlugin 6.7
44 TestAddons/parallel/Yakd 12.23
46 TestCertOptions 49.26
47 TestCertExpiration 369.78
49 TestForceSystemdFlag 66.23
50 TestForceSystemdEnv 71.75
52 TestKVMDriverInstallOrUpdate 4.44
56 TestErrorSpam/setup 41.15
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.52
60 TestErrorSpam/unpause 1.61
61 TestErrorSpam/stop 4.77
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 81.65
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.92
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.87
73 TestFunctional/serial/CacheCmd/cache/add_local 2.1
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 29.54
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.29
84 TestFunctional/serial/LogsFileCmd 1.31
85 TestFunctional/serial/InvalidService 4.71
87 TestFunctional/parallel/ConfigCmd 0.32
88 TestFunctional/parallel/DashboardCmd 30.11
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 1.01
95 TestFunctional/parallel/ServiceCmdConnect 11.58
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 46.82
99 TestFunctional/parallel/SSHCmd 0.44
100 TestFunctional/parallel/CpCmd 1.3
101 TestFunctional/parallel/MySQL 21.88
102 TestFunctional/parallel/FileSync 0.19
103 TestFunctional/parallel/CertSync 1.35
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
111 TestFunctional/parallel/License 0.55
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
123 TestFunctional/parallel/ProfileCmd/profile_list 0.29
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
125 TestFunctional/parallel/MountCmd/any-port 8.55
126 TestFunctional/parallel/MountCmd/specific-port 1.97
127 TestFunctional/parallel/ServiceCmd/List 0.36
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
131 TestFunctional/parallel/ServiceCmd/Format 0.36
132 TestFunctional/parallel/ServiceCmd/URL 0.32
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
136 TestFunctional/parallel/Version/short 0.04
137 TestFunctional/parallel/Version/components 0.47
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.39
143 TestFunctional/parallel/ImageCommands/Setup 1.79
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.04
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.38
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.81
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.47
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 196.41
158 TestMultiControlPlane/serial/DeployApp 7.02
159 TestMultiControlPlane/serial/PingHostFromPods 1.12
160 TestMultiControlPlane/serial/AddWorkerNode 55.9
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.17
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.32
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 223.52
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 76.88
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 51.7
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.64
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.65
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 84.18
211 TestMountStart/serial/StartWithMountFirst 25.79
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 28.04
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 24.11
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 116.69
223 TestMultiNode/serial/DeployApp2Nodes 5.54
224 TestMultiNode/serial/PingHostFrom2Pods 0.75
225 TestMultiNode/serial/AddNode 52.95
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.96
229 TestMultiNode/serial/StopNode 2.11
230 TestMultiNode/serial/StartAfterStop 39.04
232 TestMultiNode/serial/DeleteNode 2.18
234 TestMultiNode/serial/RestartMultiNode 205.66
235 TestMultiNode/serial/ValidateNameConflict 43.6
242 TestScheduledStopUnix 113.83
246 TestRunningBinaryUpgrade 199.24
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 93.01
253 TestStoppedBinaryUpgrade/Setup 2.29
254 TestStoppedBinaryUpgrade/Upgrade 129.7
255 TestNoKubernetes/serial/StartWithStopK8s 47.69
256 TestNoKubernetes/serial/Start 29.08
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
258 TestNoKubernetes/serial/ProfileList 26.88
259 TestNoKubernetes/serial/Stop 1.29
260 TestNoKubernetes/serial/StartNoArgs 21.14
269 TestPause/serial/Start 96.21
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
279 TestNetworkPlugins/group/false 2.8
283 TestPause/serial/SecondStartNoReconfiguration 75.92
284 TestPause/serial/Pause 0.68
285 TestPause/serial/VerifyStatus 0.24
286 TestPause/serial/Unpause 0.68
287 TestPause/serial/PauseAgain 0.78
288 TestPause/serial/DeletePaused 1.03
289 TestPause/serial/VerifyDeletedResources 0.41
293 TestStartStop/group/no-preload/serial/FirstStart 90.43
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 109.5
296 TestStartStop/group/no-preload/serial/DeployApp 10.28
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
299 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
303 TestStartStop/group/newest-cni/serial/FirstStart 42.93
305 TestStartStop/group/no-preload/serial/SecondStart 683.04
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
308 TestStartStop/group/newest-cni/serial/Stop 10.49
311 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
312 TestStartStop/group/newest-cni/serial/SecondStart 286.97
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 558.36
315 TestStartStop/group/old-k8s-version/serial/Stop 2.27
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
318 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
320 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
321 TestStartStop/group/newest-cni/serial/Pause 2.26
323 TestStartStop/group/embed-certs/serial/FirstStart 93.96
324 TestStartStop/group/embed-certs/serial/DeployApp 10.29
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
328 TestStartStop/group/embed-certs/serial/SecondStart 615.8
336 TestNetworkPlugins/group/auto/Start 83.14
337 TestNetworkPlugins/group/kindnet/Start 82.54
338 TestNetworkPlugins/group/calico/Start 95.95
339 TestNetworkPlugins/group/auto/KubeletFlags 0.21
340 TestNetworkPlugins/group/auto/NetCatPod 11.25
341 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
343 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
344 TestNetworkPlugins/group/auto/DNS 0.17
345 TestNetworkPlugins/group/auto/Localhost 0.13
346 TestNetworkPlugins/group/auto/HairPin 0.14
347 TestNetworkPlugins/group/kindnet/DNS 0.2
348 TestNetworkPlugins/group/kindnet/Localhost 0.17
349 TestNetworkPlugins/group/kindnet/HairPin 0.16
350 TestNetworkPlugins/group/custom-flannel/Start 75.79
351 TestNetworkPlugins/group/enable-default-cni/Start 75.99
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.19
354 TestNetworkPlugins/group/calico/NetCatPod 11.24
355 TestNetworkPlugins/group/calico/DNS 0.18
356 TestNetworkPlugins/group/calico/Localhost 0.13
357 TestNetworkPlugins/group/calico/HairPin 0.15
358 TestNetworkPlugins/group/flannel/Start 87.6
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
362 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
363 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.26
364 TestNetworkPlugins/group/custom-flannel/DNS 0.2
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
366 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
367 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
370 TestNetworkPlugins/group/bridge/Start 84.53
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
373 TestNetworkPlugins/group/flannel/NetCatPod 10.21
374 TestNetworkPlugins/group/flannel/DNS 0.16
375 TestNetworkPlugins/group/flannel/Localhost 0.14
376 TestNetworkPlugins/group/flannel/HairPin 0.13
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
378 TestNetworkPlugins/group/bridge/NetCatPod 11.22
379 TestNetworkPlugins/group/bridge/DNS 0.16
380 TestNetworkPlugins/group/bridge/Localhost 0.13
381 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (31.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-258496 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-258496 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (31.255768906s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (31.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-258496
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-258496: exit status 85 (53.387077ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-258496 | jenkins | v1.33.1 | 19 Aug 24 16:52 UTC |          |
	|         | -p download-only-258496        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 16:52:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 16:52:18.369235   17849 out.go:345] Setting OutFile to fd 1 ...
	I0819 16:52:18.369477   17849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:52:18.369485   17849 out.go:358] Setting ErrFile to fd 2...
	I0819 16:52:18.369489   17849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:52:18.369644   17849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	W0819 16:52:18.369755   17849 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19478-10654/.minikube/config/config.json: open /home/jenkins/minikube-integration/19478-10654/.minikube/config/config.json: no such file or directory
	I0819 16:52:18.370313   17849 out.go:352] Setting JSON to true
	I0819 16:52:18.371231   17849 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2083,"bootTime":1724084255,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 16:52:18.371289   17849 start.go:139] virtualization: kvm guest
	I0819 16:52:18.373742   17849 out.go:97] [download-only-258496] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 16:52:18.373908   17849 notify.go:220] Checking for updates...
	W0819 16:52:18.373937   17849 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 16:52:18.375391   17849 out.go:169] MINIKUBE_LOCATION=19478
	I0819 16:52:18.376890   17849 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 16:52:18.378249   17849 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 16:52:18.379699   17849 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:52:18.381055   17849 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 16:52:18.383662   17849 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 16:52:18.383967   17849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 16:52:18.486889   17849 out.go:97] Using the kvm2 driver based on user configuration
	I0819 16:52:18.486923   17849 start.go:297] selected driver: kvm2
	I0819 16:52:18.486940   17849 start.go:901] validating driver "kvm2" against <nil>
	I0819 16:52:18.487253   17849 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:52:18.487396   17849 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 16:52:18.501950   17849 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 16:52:18.501995   17849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 16:52:18.502472   17849 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 16:52:18.502627   17849 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 16:52:18.502695   17849 cni.go:84] Creating CNI manager for ""
	I0819 16:52:18.502711   17849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:52:18.502722   17849 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 16:52:18.502782   17849 start.go:340] cluster config:
	{Name:download-only-258496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-258496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 16:52:18.502970   17849 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:52:18.504763   17849 out.go:97] Downloading VM boot image ...
	I0819 16:52:18.504799   17849 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19478-10654/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0819 16:52:33.949885   17849 out.go:97] Starting "download-only-258496" primary control-plane node in "download-only-258496" cluster
	I0819 16:52:33.949910   17849 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 16:52:34.055261   17849 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 16:52:34.055298   17849 cache.go:56] Caching tarball of preloaded images
	I0819 16:52:34.055464   17849 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 16:52:34.057082   17849 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 16:52:34.057114   17849 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 16:52:34.162019   17849 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 16:52:47.939070   17849 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 16:52:47.939182   17849 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-258496 host does not exist
	  To start a cluster, run: "minikube start -p download-only-258496"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
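
The preload tarball fetched in the run above can also be mirrored outside minikube using the same URL and md5 checksum that appear in the log; a rough sketch (the target filename is illustrative, and minikube's own downloader performs the equivalent verification internally):

	# Fetch the v1.20.0 cri-o preload directly from the bucket referenced in the log.
	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	# Verify against the checksum minikube used (md5:f93b07cde9c3289306cbaeb7a1803c19).
	echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -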

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-258496
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (17.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-444293 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-444293 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.409435013s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (17.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-444293
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-444293: exit status 85 (56.096076ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-258496 | jenkins | v1.33.1 | 19 Aug 24 16:52 UTC |                     |
	|         | -p download-only-258496        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 16:52 UTC | 19 Aug 24 16:52 UTC |
	| delete  | -p download-only-258496        | download-only-258496 | jenkins | v1.33.1 | 19 Aug 24 16:52 UTC | 19 Aug 24 16:52 UTC |
	| start   | -o=json --download-only        | download-only-444293 | jenkins | v1.33.1 | 19 Aug 24 16:52 UTC |                     |
	|         | -p download-only-444293        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 16:52:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 16:52:49.931676   18145 out.go:345] Setting OutFile to fd 1 ...
	I0819 16:52:49.931917   18145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:52:49.931926   18145 out.go:358] Setting ErrFile to fd 2...
	I0819 16:52:49.931930   18145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 16:52:49.932110   18145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 16:52:49.932650   18145 out.go:352] Setting JSON to true
	I0819 16:52:49.933523   18145 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2115,"bootTime":1724084255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 16:52:49.933584   18145 start.go:139] virtualization: kvm guest
	I0819 16:52:49.935623   18145 out.go:97] [download-only-444293] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 16:52:49.935752   18145 notify.go:220] Checking for updates...
	I0819 16:52:49.937018   18145 out.go:169] MINIKUBE_LOCATION=19478
	I0819 16:52:49.938479   18145 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 16:52:49.939766   18145 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 16:52:49.941194   18145 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 16:52:49.942423   18145 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 16:52:49.944743   18145 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 16:52:49.944975   18145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 16:52:49.975925   18145 out.go:97] Using the kvm2 driver based on user configuration
	I0819 16:52:49.975969   18145 start.go:297] selected driver: kvm2
	I0819 16:52:49.975981   18145 start.go:901] validating driver "kvm2" against <nil>
	I0819 16:52:49.976305   18145 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:52:49.976394   18145 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19478-10654/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0819 16:52:49.990862   18145 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0819 16:52:49.990911   18145 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 16:52:49.991438   18145 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0819 16:52:49.991575   18145 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 16:52:49.991633   18145 cni.go:84] Creating CNI manager for ""
	I0819 16:52:49.991644   18145 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0819 16:52:49.991653   18145 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 16:52:49.991706   18145 start.go:340] cluster config:
	{Name:download-only-444293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-444293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 16:52:49.991792   18145 iso.go:125] acquiring lock: {Name:mkbdaeb8bfbcf2c6f3294821d44b96d88299985a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 16:52:49.993455   18145 out.go:97] Starting "download-only-444293" primary control-plane node in "download-only-444293" cluster
	I0819 16:52:49.993472   18145 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 16:52:50.494811   18145 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 16:52:50.494844   18145 cache.go:56] Caching tarball of preloaded images
	I0819 16:52:50.495004   18145 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 16:52:50.496770   18145 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 16:52:50.496788   18145 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 16:52:50.595100   18145 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19478-10654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-444293 host does not exist
	  To start a cluster, run: "minikube start -p download-only-444293"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-444293
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-174718 --alsologtostderr --binary-mirror http://127.0.0.1:38627 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-174718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-174718
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (79.13s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-359699 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-359699 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.114605005s)
helpers_test.go:175: Cleaning up "offline-crio-359699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-359699
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-359699: (1.019397401s)
--- PASS: TestOffline (79.13s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-825243
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-825243: exit status 85 (55.815641ms)

                                                
                                                
-- stdout --
	* Profile "addons-825243" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-825243"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-825243
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-825243: exit status 85 (55.346692ms)

                                                
                                                
-- stdout --
	* Profile "addons-825243" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-825243"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (132.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-825243 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-825243 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m12.430209859s)
--- PASS: TestAddons/Setup (132.43s)
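
Not part of the test itself, but a quick way to confirm which of the requested addons are reported as enabled on the profile created above:

	# List addon status for the addons-825243 profile.
	out/minikube-linux-amd64 addons list -p addons-825243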

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-825243 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-825243 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.533316ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-4g2dz" [eda791b5-556d-4ac5-b370-ea875a1d634a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003192306s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s2gcq" [59c4a419-cfc5-4b2f-964c-8a0b25b0d01c] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004324573s
addons_test.go:342: (dbg) Run:  kubectl --context addons-825243 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-825243 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-825243 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.705401667s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 ip
2024/08/19 16:56:06 [DEBUG] GET http://192.168.39.129:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.63s)
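
The 192.168.39.129:5000 endpoint polled above is the registry addon's node-exposed port; assuming it is backed by a standard Docker Registry v2 server (as the upstream registry image is), the same endpoint can be probed from the host with the v2 API, for example:

	# Liveness check against the registry root, then list any pushed repositories.
	curl -fsS http://192.168.39.129:5000/v2/
	curl -fsS http://192.168.39.129:5000/v2/_catalog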

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.97s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fcssc" [9e94b2c1-5b87-4ea3-8844-804ff175e68d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004198096s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-825243
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-825243: (5.96824108s)
--- PASS: TestAddons/parallel/InspektorGadget (10.97s)
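
The readiness poll in this test can be reproduced directly with kubectl wait, using the same namespace and label the log reports (a convenience sketch, not part of the harness):

	# Equivalent of the 8m readiness wait on the gadget pods.
	kubectl --context addons-825243 wait --for=condition=ready pod -l k8s-app=gadget -n gadget --timeout=8m0s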

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.56s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.175444ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-wr8hg" [f1ed9b9d-e3d1-4e09-b94f-f29a67830f09] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003610803s
addons_test.go:475: (dbg) Run:  kubectl --context addons-825243 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-825243 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.174152274s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 addons disable helm-tiller --alsologtostderr -v=1: (1.383188475s)
--- PASS: TestAddons/parallel/HelmTiller (12.56s)

TestAddons/parallel/CSI (55.65s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.008283ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-825243 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-825243 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1d67a8e0-3406-4563-9672-b5715b330a72] Pending
helpers_test.go:344: "task-pv-pod" [1d67a8e0-3406-4563-9672-b5715b330a72] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1d67a8e0-3406-4563-9672-b5715b330a72] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.00353134s
addons_test.go:590: (dbg) Run:  kubectl --context addons-825243 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-825243 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-825243 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-825243 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-825243 delete pod task-pv-pod: (1.209066094s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-825243 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-825243 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-825243 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ddcf94a4-07b5-4c33-8eb7-4c50f1fb27cc] Pending
helpers_test.go:344: "task-pv-pod-restore" [ddcf94a4-07b5-4c33-8eb7-4c50f1fb27cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ddcf94a4-07b5-4c33-8eb7-4c50f1fb27cc] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003834813s
addons_test.go:632: (dbg) Run:  kubectl --context addons-825243 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-825243 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-825243 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.7213641s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.65s)
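
The repeated helpers_test.go:394 lines above are a poll of the claim's .status.phase until it reports Bound. A minimal Go sketch of that polling loop, not taken from the suite, assuming kubectl is on PATH and the addons-825243 context exists (the timeout and interval are arbitrary example values):

// pvcwait.go - illustrative only; mirrors the jsonpath polling shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase returns the .status.phase of a PersistentVolumeClaim.
func pvcPhase(ctx, name, ns string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}", "-n", ns).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-825243", "hpvc", "default")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc")
}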

TestAddons/parallel/Headlamp (18.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-825243 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-825243 --alsologtostderr -v=1: (1.236908299s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-nkjph" [b8826978-03c5-4030-b566-b8775dd88f94] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-nkjph" [b8826978-03c5-4030-b566-b8775dd88f94] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-nkjph" [b8826978-03c5-4030-b566-b8775dd88f94] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004159388s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 addons disable headlamp --alsologtostderr -v=1: (5.694871325s)
--- PASS: TestAddons/parallel/Headlamp (18.94s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-zfz2w" [63010d73-d8f2-4bab-80eb-27c64ae1f8cb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003905s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-825243
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/LocalPath (12.26s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-825243 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-825243 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [45f98488-b8cc-4663-8e81-6f3f25623821] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [45f98488-b8cc-4663-8e81-6f3f25623821] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [45f98488-b8cc-4663-8e81-6f3f25623821] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004044348s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-825243 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 ssh "cat /opt/local-path-provisioner/pvc-63640194-31bc-4782-b58f-2706becef52c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-825243 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-825243 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.26s)

TestAddons/parallel/NvidiaDevicePlugin (6.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vcml2" [8b9d9981-f3de-4307-9e9f-2ee8621a11c8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004906124s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-825243
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

TestAddons/parallel/Yakd (12.23s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fhbcm" [23632270-5695-4349-9c89-13574ba3821d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008232698s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-825243 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-825243 addons disable yakd --alsologtostderr -v=1: (6.222861808s)
--- PASS: TestAddons/parallel/Yakd (12.23s)

TestCertOptions (49.26s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-948260 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-948260 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (48.08688152s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-948260 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-948260 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-948260 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-948260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-948260
--- PASS: TestCertOptions (49.26s)
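
cert_options_test.go:60 above checks that the extra --apiserver-ips and --apiserver-names values ended up in the apiserver certificate by dumping it with openssl over minikube ssh. A rough Go sketch of that check, assuming the cert-options-948260 profile still exists (the real test deletes it at the end):

// certsan.go - illustrative only; not the suite's implementation.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-948260",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		log.Fatalf("reading apiserver cert: %v", err)
	}
	cert := string(out)
	// The extra values passed via --apiserver-ips / --apiserver-names should
	// show up in the certificate's Subject Alternative Names.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		fmt.Printf("%s present in cert: %v\n", want, strings.Contains(cert, want))
	}
}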

TestCertExpiration (369.78s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-975771 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-975771 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m10.129548179s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-975771 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-975771 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m58.680096296s)
helpers_test.go:175: Cleaning up "cert-expiration-975771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-975771
--- PASS: TestCertExpiration (369.78s)

TestForceSystemdFlag (66.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-170488 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-170488 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.23414909s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-170488 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-170488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-170488
--- PASS: TestForceSystemdFlag (66.23s)
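
docker_test.go:132 above reads the CRI-O drop-in written when --force-systemd is set. A rough Go sketch of a follow-up check, assuming the force-systemd-flag-170488 profile still exists and assuming CRI-O records the setting as cgroup_manager = "systemd" (the key name is an assumption about CRI-O's config format, not confirmed by this log):

// cgroupcheck.go - illustrative only.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-170488",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		log.Fatalf("reading crio drop-in: %v", err)
	}
	// Assumed key/value; print the whole file if the match fails.
	fmt.Println("systemd cgroup manager configured:",
		strings.Contains(string(out), `cgroup_manager = "systemd"`))
}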

TestForceSystemdEnv (71.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-380066 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-380066 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.735671571s)
helpers_test.go:175: Cleaning up "force-systemd-env-380066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-380066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-380066: (1.009838869s)
--- PASS: TestForceSystemdEnv (71.75s)

TestKVMDriverInstallOrUpdate (4.44s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.44s)

TestErrorSpam/setup (41.15s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-109866 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-109866 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-109866 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-109866 --driver=kvm2  --container-runtime=crio: (41.153225153s)
--- PASS: TestErrorSpam/setup (41.15s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 unpause
E0819 17:05:21.263216   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:05:21.270280   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:05:21.281637   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:05:21.303035   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:05:21.344457   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:05:21.425974   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 unpause
E0819 17:05:21.587326   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:05:21.909446   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 unpause
E0819 17:05:22.551114   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (4.77s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 stop
E0819 17:05:23.832886   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 stop: (1.618995247s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 stop: (1.310640351s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 stop
E0819 17:05:26.394932   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-109866 --log_dir /tmp/nospam-109866 stop: (1.839065584s)
--- PASS: TestErrorSpam/stop (4.77s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19478-10654/.minikube/files/etc/test/nested/copy/17837/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
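
The local sync path above relies on minikube's file-sync convention: files placed under $MINIKUBE_HOME/.minikube/files/<path> are copied to /<path> inside the node when the cluster starts. A minimal Go sketch of staging such a file, with made-up example paths and assuming MINIKUBE_HOME is set; the in-guest check only works after the next minikube start:

// filesync.go - illustrative only; example paths, not the ones from this run.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME") // assumption: parent directory of .minikube for this job
	src := filepath.Join(home, ".minikube", "files", "etc", "test", "hello.txt")
	if err := os.MkdirAll(filepath.Dir(src), 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(src, []byte("synced\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// The file is copied into the guest on the next `minikube start`; after
	// that, it should be readable at the mirrored path inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-632788",
		"ssh", "cat /etc/test/hello.txt").CombinedOutput()
	if err != nil {
		log.Fatalf("not synced yet (restart the cluster first): %v", err)
	}
	fmt.Print(string(out))
}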

TestFunctional/serial/StartWithProxy (81.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-632788 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0819 17:05:31.517079   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:05:41.758429   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:06:02.240234   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:06:43.203219   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-632788 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m21.649411817s)
--- PASS: TestFunctional/serial/StartWithProxy (81.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.92s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-632788 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-632788 --alsologtostderr -v=8: (40.917502831s)
functional_test.go:663: soft start took 40.918088535s for "functional-632788" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.92s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-632788 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 cache add registry.k8s.io/pause:3.1: (1.272634583s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 cache add registry.k8s.io/pause:3.3: (1.319118443s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 cache add registry.k8s.io/pause:latest: (1.281738085s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-632788 /tmp/TestFunctionalserialCacheCmdcacheadd_local3968971738/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cache add minikube-local-cache-test:functional-632788
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 cache add minikube-local-cache-test:functional-632788: (1.780551213s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cache delete minikube-local-cache-test:functional-632788
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-632788
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)
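
add_local above builds a throwaway local image, pushes it into minikube's cache, then removes it from both sides. A rough Go sketch of the same flow, assuming docker and out/minikube-linux-amd64 are available and a functional-632788 profile exists; the Dockerfile contents are a made-up minimal example:

// localcache.go - illustrative only.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// run executes a command, streaming its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	const tag = "minikube-local-cache-test:functional-632788"

	// Build a throwaway local image to cache.
	dir, err := os.MkdirTemp("", "cache-add-local")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"),
		[]byte("FROM gcr.io/k8s-minikube/busybox\nCMD [\"true\"]\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	run("docker", "build", "-t", tag, dir)

	// Push it into minikube's cache, then clean up both sides.
	run("out/minikube-linux-amd64", "-p", "functional-632788", "cache", "add", tag)
	run("out/minikube-linux-amd64", "-p", "functional-632788", "cache", "delete", tag)
	run("docker", "rmi", tag)
}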

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.794537ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 cache reload: (1.028213704s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
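
cache_reload above deliberately removes the cached pause image from the node, confirms crictl no longer finds it, and then restores it with minikube cache reload. A minimal Go sketch of that remove/reload cycle, assuming the functional-632788 profile is still running and out/minikube-linux-amd64 is the binary under test:

// cachereload.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test and reports only success/failure.
func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	// Remove the cached image from the node, confirm crictl no longer sees it,
	// then ask minikube to push the cache back and confirm it reappears.
	_ = run("-p", "functional-632788", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	fmt.Println("image gone after rmi:",
		run("-p", "functional-632788", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") != nil)
	_ = run("-p", "functional-632788", "cache", "reload")
	fmt.Println("image back after reload:",
		run("-p", "functional-632788", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil)
}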

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 kubectl -- --context functional-632788 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-632788 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (29.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-632788 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0819 17:08:05.124645   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-632788 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.535985865s)
functional_test.go:761: restart took 29.536096304s for "functional-632788" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.54s)
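
ExtraConfig above restarts the cluster with an apiserver --extra-config value and reports how long the restart took. A minimal Go sketch of timing such a restart, assuming out/minikube-linux-amd64 is the binary under test and the functional-632788 profile exists:

// timedrestart.go - illustrative only.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-632788",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("restart failed: %v", err)
	}
	// Mirrors the "restart took ..." line the test prints on success.
	fmt.Printf("restart took %s for \"functional-632788\" cluster.\n", time.Since(start))
}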

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-632788 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
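
ComponentHealth above lists the control-plane pods as JSON and reports each component's phase and Ready status. A rough Go sketch of the same readout, decoding only the fields it needs, assuming kubectl is on PATH and the functional-632788 context exists:

// controlplanehealth.go - illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList captures just the labels, phase, and conditions of each pod.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-632788",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}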

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 logs: (1.289529288s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 logs --file /tmp/TestFunctionalserialLogsFileCmd512643870/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 logs --file /tmp/TestFunctionalserialLogsFileCmd512643870/001/logs.txt: (1.310423338s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (4.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-632788 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-632788
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-632788: exit status 115 (273.161082ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.66:32394 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-632788 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-632788 delete -f testdata/invalidsvc.yaml: (1.251705831s)
--- PASS: TestFunctional/serial/InvalidService (4.71s)
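
The "exit status 115" above is how minikube signals SVC_UNREACHABLE for a Service with no running pods. A minimal Go sketch of reading that exit code programmatically, assuming the invalid-svc manifest has been applied as in the test:

// exitcode.go - illustrative only; treating 115 as the expected outcome is
// this sketch's assumption, based on the run recorded above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-632788")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The process exited non-zero; report the specific code.
		fmt.Println("minikube service exited with code", exitErr.ExitCode())
	}
}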

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 config get cpus: exit status 14 (50.730481ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 config get cpus: exit status 14 (52.476616ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (30.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-632788 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-632788 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27158: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.11s)
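
DashboardCmd above starts minikube dashboard --url as a background process and then stops it, which is why the cleanup may report that the pid is already gone. A minimal Go sketch of that start/stop pattern, assuming the functional-632788 profile is running (the 30-second wait is an arbitrary example):

// dashboarddaemon.go - illustrative only.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url",
		"--port", "36195", "-p", "functional-632788", "--alsologtostderr", "-v=1")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatalf("starting dashboard: %v", err)
	}
	// Give the tunnel time to print its URL, then stop the background process,
	// as the test's cleanup does.
	time.Sleep(30 * time.Second)
	if err := cmd.Process.Kill(); err != nil {
		log.Printf("kill: %v (process may have already exited)", err)
	}
	_ = cmd.Wait()
}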

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-632788 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-632788 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.205012ms)

-- stdout --
	* [functional-632788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 17:08:28.778760   26727 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:08:28.779174   26727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:08:28.779189   26727 out.go:358] Setting ErrFile to fd 2...
	I0819 17:08:28.779196   26727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:08:28.779612   26727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:08:28.780659   26727 out.go:352] Setting JSON to false
	I0819 17:08:28.781724   26727 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3054,"bootTime":1724084255,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:08:28.781783   26727 start.go:139] virtualization: kvm guest
	I0819 17:08:28.783466   26727 out.go:177] * [functional-632788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:08:28.785074   26727 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:08:28.785088   26727 notify.go:220] Checking for updates...
	I0819 17:08:28.787917   26727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:08:28.789389   26727 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:08:28.790622   26727 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:08:28.791822   26727 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:08:28.792935   26727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:08:28.794403   26727 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:08:28.794778   26727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:08:28.794814   26727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:08:28.810998   26727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0819 17:08:28.811394   26727 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:08:28.811889   26727 main.go:141] libmachine: Using API Version  1
	I0819 17:08:28.811907   26727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:08:28.812261   26727 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:08:28.812462   26727 main.go:141] libmachine: (functional-632788) Calling .DriverName
	I0819 17:08:28.812725   26727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:08:28.813148   26727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:08:28.813193   26727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:08:28.829283   26727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0819 17:08:28.829835   26727 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:08:28.830548   26727 main.go:141] libmachine: Using API Version  1
	I0819 17:08:28.830583   26727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:08:28.830945   26727 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:08:28.831118   26727 main.go:141] libmachine: (functional-632788) Calling .DriverName
	I0819 17:08:28.869522   26727 out.go:177] * Using the kvm2 driver based on existing profile
	I0819 17:08:28.870776   26727 start.go:297] selected driver: kvm2
	I0819 17:08:28.870804   26727 start.go:901] validating driver "kvm2" against &{Name:functional-632788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-632788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:08:28.870941   26727 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:08:28.872871   26727 out.go:201] 
	W0819 17:08:28.874299   26727 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 17:08:28.875431   26727 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-632788 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-632788 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-632788 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.76683ms)

                                                
                                                
-- stdout --
	* [functional-632788] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:08:28.648029   26694 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:08:28.648171   26694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:08:28.648184   26694 out.go:358] Setting ErrFile to fd 2...
	I0819 17:08:28.648192   26694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:08:28.648632   26694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:08:28.649385   26694 out.go:352] Setting JSON to false
	I0819 17:08:28.650725   26694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3054,"bootTime":1724084255,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:08:28.650807   26694 start.go:139] virtualization: kvm guest
	I0819 17:08:28.652846   26694 out.go:177] * [functional-632788] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 17:08:28.654098   26694 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:08:28.654191   26694 notify.go:220] Checking for updates...
	I0819 17:08:28.656928   26694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:08:28.658507   26694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:08:28.659814   26694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:08:28.661078   26694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:08:28.662250   26694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:08:28.663774   26694 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:08:28.664166   26694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:08:28.664219   26694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:08:28.680316   26694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43123
	I0819 17:08:28.680800   26694 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:08:28.681431   26694 main.go:141] libmachine: Using API Version  1
	I0819 17:08:28.681452   26694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:08:28.681731   26694 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:08:28.681902   26694 main.go:141] libmachine: (functional-632788) Calling .DriverName
	I0819 17:08:28.682141   26694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:08:28.682456   26694 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:08:28.682494   26694 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:08:28.697096   26694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I0819 17:08:28.697548   26694 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:08:28.698006   26694 main.go:141] libmachine: Using API Version  1
	I0819 17:08:28.698027   26694 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:08:28.698350   26694 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:08:28.698517   26694 main.go:141] libmachine: (functional-632788) Calling .DriverName
	I0819 17:08:28.729795   26694 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0819 17:08:28.730901   26694 start.go:297] selected driver: kvm2
	I0819 17:08:28.730920   26694 start.go:901] validating driver "kvm2" against &{Name:functional-632788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-632788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:08:28.731022   26694 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:08:28.733015   26694 out.go:201] 
	W0819 17:08:28.734075   26694 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 17:08:28.735252   26694 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
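The French stdout/stderr above is the point of this test: the same RSRC_INSUFFICIENT_REQ_MEMORY failure is reported through minikube's localized messages. The language is normally picked up from the caller's locale environment; a hedged sketch of reproducing it by hand (the LC_ALL value is an assumption, not something recorded in this log):

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-632788 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio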

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-632788 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-632788 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-g66f6" [0a0a1526-9923-47f0-8d2e-6451d13c0196] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-g66f6" [0a0a1526-9923-47f0-8d2e-6451d13c0196] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.016134911s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.66:30719
functional_test.go:1675: http://192.168.39.66:30719: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-g66f6

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.66:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.66:30719
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.58s)
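The flow exercised above can be replayed from the commands in the log: create the deployment, expose it as a NodePort, then let minikube resolve the node URL. A minimal sketch (the final curl is an illustrative addition, not part of the test):

	kubectl --context functional-632788 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-632788 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-632788 service hello-node-connect --url)
	curl -s "$URL"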

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a6d26662-e469-4fe6-86e1-26d5740c1a9c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004829308s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-632788 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-632788 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-632788 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-632788 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1e3c4b1e-87cd-439a-9609-cc633daace31] Pending
helpers_test.go:344: "sp-pod" [1e3c4b1e-87cd-439a-9609-cc633daace31] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1e3c4b1e-87cd-439a-9609-cc633daace31] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003809207s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-632788 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-632788 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-632788 delete -f testdata/storage-provisioner/pod.yaml: (2.994738896s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-632788 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9a0a2ab1-e292-4419-8602-25634b8436a5] Pending
helpers_test.go:344: "sp-pod" [9a0a2ab1-e292-4419-8602-25634b8436a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9a0a2ab1-e292-4419-8602-25634b8436a5] Running
2024/08/19 17:08:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003620509s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-632788 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.82s)
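The persistence check above reduces to: bind a pod to the claim, write a file through it, delete and recreate the pod against the same claim, and read the file back. A condensed sketch using the manifests referenced in the log:

	kubectl --context functional-632788 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-632788 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-632788 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-632788 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-632788 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-632788 exec sp-pod -- ls /tmp/mount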

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh -n functional-632788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cp functional-632788:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd931200983/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh -n functional-632788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh -n functional-632788 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
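Both copy directions are covered — host-to-VM and VM-to-host — and each is verified with a cat over ssh. Condensed from the log (the host destination path here is illustrative; the test uses a temporary directory):

	out/minikube-linux-amd64 -p functional-632788 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-632788 cp functional-632788:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-amd64 -p functional-632788 ssh -n functional-632788 "sudo cat /home/docker/cp-test.txt"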

                                                
                                    
TestFunctional/parallel/MySQL (21.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-632788 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-jtntz" [e34b5de4-e807-47bb-94a3-a280b82c6eee] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-jtntz" [e34b5de4-e807-47bb-94a3-a280b82c6eee] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003963054s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-632788 exec mysql-6cdb49bbb-jtntz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-632788 exec mysql-6cdb49bbb-jtntz -- mysql -ppassword -e "show databases;": exit status 1 (190.827077ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-632788 exec mysql-6cdb49bbb-jtntz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.88s)
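The first exec above fails with ERROR 2002 because mysqld is still initializing inside the pod; the test simply retries until the socket comes up. A hedged sketch of the same retry done by hand (the loop bound and sleep are illustrative):

	for i in $(seq 1 10); do
	  kubectl --context functional-632788 exec mysql-6cdb49bbb-jtntz -- mysql -ppassword -e "show databases;" && break
	  sleep 3
	done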

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17837/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo cat /etc/test/nested/copy/17837/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17837.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo cat /etc/ssl/certs/17837.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17837.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo cat /usr/share/ca-certificates/17837.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/178372.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo cat /etc/ssl/certs/178372.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/178372.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo cat /usr/share/ca-certificates/178372.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-632788 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh "sudo systemctl is-active docker": exit status 1 (219.052038ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh "sudo systemctl is-active containerd": exit status 1 (215.236886ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                    
TestFunctional/parallel/License (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-632788 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-632788 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2klx9" [97977231-32c7-4ba5-b84c-76f24cc080f7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2klx9" [97977231-32c7-4ba5-b84c-76f24cc080f7] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003108889s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "240.951026ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.224085ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "243.983521ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.28715ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdany-port2472354801/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724087297715700605" to /tmp/TestFunctionalparallelMountCmdany-port2472354801/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724087297715700605" to /tmp/TestFunctionalparallelMountCmdany-port2472354801/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724087297715700605" to /tmp/TestFunctionalparallelMountCmdany-port2472354801/001/test-1724087297715700605
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.626403ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 17:08 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 17:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 17:08 test-1724087297715700605
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh cat /mount-9p/test-1724087297715700605
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-632788 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7ac677ae-3492-46c6-a0a3-8bd5e11a270e] Pending
helpers_test.go:344: "busybox-mount" [7ac677ae-3492-46c6-a0a3-8bd5e11a270e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7ac677ae-3492-46c6-a0a3-8bd5e11a270e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7ac677ae-3492-46c6-a0a3-8bd5e11a270e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004227616s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-632788 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdany-port2472354801/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.55s)
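The any-port variant starts the 9p mount on an ephemeral port, then verifies it from inside the VM with findmnt and a directory listing before unmounting. A minimal sketch of the same flow (the host path is illustrative):

	out/minikube-linux-amd64 mount -p functional-632788 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-632788 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-632788 ssh "sudo umount -f /mount-9p"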

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdspecific-port3985443906/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.73385ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdspecific-port3985443906/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh "sudo umount -f /mount-9p": exit status 1 (209.211131ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-632788 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdspecific-port3985443906/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 service list -o json
functional_test.go:1494: Took "360.233844ms" to run "out/minikube-linux-amd64 -p functional-632788 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.66:32472
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3127343352/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3127343352/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3127343352/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T" /mount1: exit status 1 (323.718589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-632788 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3127343352/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3127343352/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-632788 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3127343352/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.66:32472
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-632788 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-632788
localhost/kicbase/echo-server:functional-632788
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-632788 image ls --format short --alsologtostderr:
I0819 17:08:46.060994   27769 out.go:345] Setting OutFile to fd 1 ...
I0819 17:08:46.061240   27769 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:46.061249   27769 out.go:358] Setting ErrFile to fd 2...
I0819 17:08:46.061264   27769 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:46.061448   27769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
I0819 17:08:46.062164   27769 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:46.062308   27769 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:46.062713   27769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:46.062754   27769 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:46.077431   27769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
I0819 17:08:46.077959   27769 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:46.078546   27769 main.go:141] libmachine: Using API Version  1
I0819 17:08:46.078573   27769 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:46.078905   27769 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:46.079091   27769 main.go:141] libmachine: (functional-632788) Calling .GetState
I0819 17:08:46.080885   27769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:46.080923   27769 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:46.095045   27769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
I0819 17:08:46.095446   27769 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:46.095863   27769 main.go:141] libmachine: Using API Version  1
I0819 17:08:46.095880   27769 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:46.096174   27769 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:46.096357   27769 main.go:141] libmachine: (functional-632788) Calling .DriverName
I0819 17:08:46.096570   27769 ssh_runner.go:195] Run: systemctl --version
I0819 17:08:46.096600   27769 main.go:141] libmachine: (functional-632788) Calling .GetSSHHostname
I0819 17:08:46.099282   27769 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:46.099688   27769 main.go:141] libmachine: (functional-632788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f3:c8", ip: ""} in network mk-functional-632788: {Iface:virbr1 ExpiryTime:2024-08-19 18:05:41 +0000 UTC Type:0 Mac:52:54:00:b9:f3:c8 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-632788 Clientid:01:52:54:00:b9:f3:c8}
I0819 17:08:46.099715   27769 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined IP address 192.168.39.66 and MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:46.099825   27769 main.go:141] libmachine: (functional-632788) Calling .GetSSHPort
I0819 17:08:46.099976   27769 main.go:141] libmachine: (functional-632788) Calling .GetSSHKeyPath
I0819 17:08:46.100116   27769 main.go:141] libmachine: (functional-632788) Calling .GetSSHUsername
I0819 17:08:46.100232   27769 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/functional-632788/id_rsa Username:docker}
I0819 17:08:46.187773   27769 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 17:08:46.220319   27769 main.go:141] libmachine: Making call to close driver server
I0819 17:08:46.220337   27769 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:46.220604   27769 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:46.220624   27769 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:46.220634   27769 main.go:141] libmachine: Making call to close driver server
I0819 17:08:46.220642   27769 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:46.220855   27769 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:46.220878   27769 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:46.220884   27769 main.go:141] libmachine: (functional-632788) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-632788 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-632788  | caa2c14248bcb | 3.33kB |
| localhost/my-image                      | functional-632788  | 05662ce9bf390 | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-632788  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-632788 image ls --format table --alsologtostderr:
I0819 17:08:50.229910   27934 out.go:345] Setting OutFile to fd 1 ...
I0819 17:08:50.230161   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:50.230170   27934 out.go:358] Setting ErrFile to fd 2...
I0819 17:08:50.230176   27934 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:50.230362   27934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
I0819 17:08:50.230884   27934 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:50.230994   27934 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:50.231349   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:50.231400   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:50.246105   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33165
I0819 17:08:50.246474   27934 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:50.246975   27934 main.go:141] libmachine: Using API Version  1
I0819 17:08:50.246997   27934 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:50.247379   27934 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:50.247609   27934 main.go:141] libmachine: (functional-632788) Calling .GetState
I0819 17:08:50.249341   27934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:50.249380   27934 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:50.263453   27934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34199
I0819 17:08:50.264004   27934 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:50.264462   27934 main.go:141] libmachine: Using API Version  1
I0819 17:08:50.264483   27934 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:50.264804   27934 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:50.264985   27934 main.go:141] libmachine: (functional-632788) Calling .DriverName
I0819 17:08:50.265167   27934 ssh_runner.go:195] Run: systemctl --version
I0819 17:08:50.265192   27934 main.go:141] libmachine: (functional-632788) Calling .GetSSHHostname
I0819 17:08:50.267944   27934 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:50.268370   27934 main.go:141] libmachine: (functional-632788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f3:c8", ip: ""} in network mk-functional-632788: {Iface:virbr1 ExpiryTime:2024-08-19 18:05:41 +0000 UTC Type:0 Mac:52:54:00:b9:f3:c8 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-632788 Clientid:01:52:54:00:b9:f3:c8}
I0819 17:08:50.268405   27934 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined IP address 192.168.39.66 and MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:50.268577   27934 main.go:141] libmachine: (functional-632788) Calling .GetSSHPort
I0819 17:08:50.268766   27934 main.go:141] libmachine: (functional-632788) Calling .GetSSHKeyPath
I0819 17:08:50.268922   27934 main.go:141] libmachine: (functional-632788) Calling .GetSSHUsername
I0819 17:08:50.269058   27934 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/functional-632788/id_rsa Username:docker}
I0819 17:08:50.400060   27934 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 17:08:50.454834   27934 main.go:141] libmachine: Making call to close driver server
I0819 17:08:50.454854   27934 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:50.455115   27934 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:50.455132   27934 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:50.455141   27934 main.go:141] libmachine: Making call to close driver server
I0819 17:08:50.455143   27934 main.go:141] libmachine: (functional-632788) DBG | Closing plugin on server side
I0819 17:08:50.455149   27934 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:50.455370   27934 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:50.455382   27934 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:50.455420   27934 main.go:141] libmachine: (functional-632788) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-632788 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"604f5db92eaa823d11c141d8825f14
60206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{
"id":"caa2c14248bcb0a1cfbdab3ea2adb4518735d9ec4e288733de28c961bd7bb451","repoDigests":["localhost/minikube-local-cache-test@sha256:63f3da6760d33acbccafdac860fbad5d076c9b88f415c6989c47e1f831b8722a"],"repoTags":["localhost/minikube-local-cache-test:functional-632788"],"size":"3330"},{"id":"05662ce9bf39064ab5b6799784a5cbdf8eadb508a555713a9589c2f4cd3e567f","repoDigests":["localhost/my-image@sha256:0bfd04129d4779666439b2f138458989b2d415652351b1af1bab9fbfb0ba3c0c"],"repoTags":["localhost/my-image:functional-632788"],"size":"1468598"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha25
6:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-632788"],"size":"4943877"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c1055059473
34539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c
6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"58063736ea0f018df3c052b7d82f80fbbd14a75df170045c09c317d5415016cf","repoDigests":["docker.io/library/c5304237ce8a9c95a381c4e7abf6adc0b1fe855aa63cedefb32c5f1f8deb3af8-tmp@sha256:d72751c38c52a1d972f1ae919819376f9a6ae9c939bd91ec4811440782cf1bf7"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b3
6e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.
1"],"size":"746911"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-632788 image ls --format json --alsologtostderr:
I0819 17:08:49.973558   27911 out.go:345] Setting OutFile to fd 1 ...
I0819 17:08:49.973788   27911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:49.973803   27911 out.go:358] Setting ErrFile to fd 2...
I0819 17:08:49.973808   27911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:49.973967   27911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
I0819 17:08:49.974503   27911 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:49.974596   27911 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:49.974946   27911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:49.974982   27911 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:49.989443   27911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
I0819 17:08:49.989924   27911 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:49.990482   27911 main.go:141] libmachine: Using API Version  1
I0819 17:08:49.990501   27911 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:49.990842   27911 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:49.991027   27911 main.go:141] libmachine: (functional-632788) Calling .GetState
I0819 17:08:49.992645   27911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:49.992678   27911 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:50.012730   27911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34759
I0819 17:08:50.013163   27911 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:50.013691   27911 main.go:141] libmachine: Using API Version  1
I0819 17:08:50.013715   27911 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:50.014041   27911 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:50.014240   27911 main.go:141] libmachine: (functional-632788) Calling .DriverName
I0819 17:08:50.014428   27911 ssh_runner.go:195] Run: systemctl --version
I0819 17:08:50.014447   27911 main.go:141] libmachine: (functional-632788) Calling .GetSSHHostname
I0819 17:08:50.017344   27911 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:50.017726   27911 main.go:141] libmachine: (functional-632788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f3:c8", ip: ""} in network mk-functional-632788: {Iface:virbr1 ExpiryTime:2024-08-19 18:05:41 +0000 UTC Type:0 Mac:52:54:00:b9:f3:c8 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-632788 Clientid:01:52:54:00:b9:f3:c8}
I0819 17:08:50.017750   27911 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined IP address 192.168.39.66 and MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:50.017929   27911 main.go:141] libmachine: (functional-632788) Calling .GetSSHPort
I0819 17:08:50.018097   27911 main.go:141] libmachine: (functional-632788) Calling .GetSSHKeyPath
I0819 17:08:50.018237   27911 main.go:141] libmachine: (functional-632788) Calling .GetSSHUsername
I0819 17:08:50.018409   27911 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/functional-632788/id_rsa Username:docker}
I0819 17:08:50.131188   27911 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 17:08:50.181385   27911 main.go:141] libmachine: Making call to close driver server
I0819 17:08:50.181408   27911 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:50.181691   27911 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:50.181712   27911 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:50.181721   27911 main.go:141] libmachine: Making call to close driver server
I0819 17:08:50.181731   27911 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:50.182054   27911 main.go:141] libmachine: (functional-632788) DBG | Closing plugin on server side
I0819 17:08:50.182128   27911 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:50.182159   27911 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
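
The stdout above is a flat JSON array of objects with id, repoDigests, repoTags and size fields. A minimal Go sketch, assuming that schema and the binary path and profile name recorded in this run, that decodes the listing and prints each tagged image with its size:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// imageInfo mirrors the fields visible in the `image ls --format json` stdout above;
// the field set is inferred from this log, not taken from minikube's source.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same invocation as the test above; binary path and profile name come from this log.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-632788",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}
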
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-632788 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-632788
size: "4943877"
- id: caa2c14248bcb0a1cfbdab3ea2adb4518735d9ec4e288733de28c961bd7bb451
repoDigests:
- localhost/minikube-local-cache-test@sha256:63f3da6760d33acbccafdac860fbad5d076c9b88f415c6989c47e1f831b8722a
repoTags:
- localhost/minikube-local-cache-test:functional-632788
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-632788 image ls --format yaml --alsologtostderr:
I0819 17:08:46.268011   27794 out.go:345] Setting OutFile to fd 1 ...
I0819 17:08:46.268275   27794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:46.268284   27794 out.go:358] Setting ErrFile to fd 2...
I0819 17:08:46.268288   27794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:46.268611   27794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
I0819 17:08:46.269205   27794 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:46.269339   27794 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:46.269805   27794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:46.269859   27794 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:46.284410   27794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
I0819 17:08:46.284952   27794 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:46.285511   27794 main.go:141] libmachine: Using API Version  1
I0819 17:08:46.285533   27794 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:46.285918   27794 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:46.286118   27794 main.go:141] libmachine: (functional-632788) Calling .GetState
I0819 17:08:46.288165   27794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:46.288215   27794 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:46.305822   27794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
I0819 17:08:46.306244   27794 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:46.306714   27794 main.go:141] libmachine: Using API Version  1
I0819 17:08:46.306737   27794 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:46.307050   27794 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:46.307247   27794 main.go:141] libmachine: (functional-632788) Calling .DriverName
I0819 17:08:46.307468   27794 ssh_runner.go:195] Run: systemctl --version
I0819 17:08:46.307490   27794 main.go:141] libmachine: (functional-632788) Calling .GetSSHHostname
I0819 17:08:46.310384   27794 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:46.310823   27794 main.go:141] libmachine: (functional-632788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f3:c8", ip: ""} in network mk-functional-632788: {Iface:virbr1 ExpiryTime:2024-08-19 18:05:41 +0000 UTC Type:0 Mac:52:54:00:b9:f3:c8 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-632788 Clientid:01:52:54:00:b9:f3:c8}
I0819 17:08:46.310854   27794 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined IP address 192.168.39.66 and MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:46.310992   27794 main.go:141] libmachine: (functional-632788) Calling .GetSSHPort
I0819 17:08:46.311179   27794 main.go:141] libmachine: (functional-632788) Calling .GetSSHKeyPath
I0819 17:08:46.311338   27794 main.go:141] libmachine: (functional-632788) Calling .GetSSHUsername
I0819 17:08:46.311506   27794 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/functional-632788/id_rsa Username:docker}
I0819 17:08:46.409367   27794 ssh_runner.go:195] Run: sudo crictl images --output json
I0819 17:08:46.538842   27794 main.go:141] libmachine: Making call to close driver server
I0819 17:08:46.538857   27794 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:46.539138   27794 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:46.539157   27794 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:46.539168   27794 main.go:141] libmachine: Making call to close driver server
I0819 17:08:46.539178   27794 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:46.539397   27794 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:46.539421   27794 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:46.539449   27794 main.go:141] libmachine: (functional-632788) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-632788 ssh pgrep buildkitd: exit status 1 (226.986223ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image build -t localhost/my-image:functional-632788 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 image build -t localhost/my-image:functional-632788 testdata/build --alsologtostderr: (2.933479168s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-632788 image build -t localhost/my-image:functional-632788 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 58063736ea0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-632788
--> 05662ce9bf3
Successfully tagged localhost/my-image:functional-632788
05662ce9bf39064ab5b6799784a5cbdf8eadb508a555713a9589c2f4cd3e567f
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-632788 image build -t localhost/my-image:functional-632788 testdata/build --alsologtostderr:
I0819 17:08:46.810432   27863 out.go:345] Setting OutFile to fd 1 ...
I0819 17:08:46.810711   27863 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:46.810719   27863 out.go:358] Setting ErrFile to fd 2...
I0819 17:08:46.810724   27863 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 17:08:46.810889   27863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
I0819 17:08:46.811460   27863 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:46.811964   27863 config.go:182] Loaded profile config "functional-632788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 17:08:46.812322   27863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:46.812361   27863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:46.826746   27863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
I0819 17:08:46.827209   27863 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:46.827745   27863 main.go:141] libmachine: Using API Version  1
I0819 17:08:46.827769   27863 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:46.828094   27863 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:46.828280   27863 main.go:141] libmachine: (functional-632788) Calling .GetState
I0819 17:08:46.830188   27863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0819 17:08:46.830229   27863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0819 17:08:46.844307   27863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45107
I0819 17:08:46.844721   27863 main.go:141] libmachine: () Calling .GetVersion
I0819 17:08:46.845172   27863 main.go:141] libmachine: Using API Version  1
I0819 17:08:46.845195   27863 main.go:141] libmachine: () Calling .SetConfigRaw
I0819 17:08:46.845463   27863 main.go:141] libmachine: () Calling .GetMachineName
I0819 17:08:46.845606   27863 main.go:141] libmachine: (functional-632788) Calling .DriverName
I0819 17:08:46.845794   27863 ssh_runner.go:195] Run: systemctl --version
I0819 17:08:46.845819   27863 main.go:141] libmachine: (functional-632788) Calling .GetSSHHostname
I0819 17:08:46.848330   27863 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:46.848697   27863 main.go:141] libmachine: (functional-632788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f3:c8", ip: ""} in network mk-functional-632788: {Iface:virbr1 ExpiryTime:2024-08-19 18:05:41 +0000 UTC Type:0 Mac:52:54:00:b9:f3:c8 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-632788 Clientid:01:52:54:00:b9:f3:c8}
I0819 17:08:46.848722   27863 main.go:141] libmachine: (functional-632788) DBG | domain functional-632788 has defined IP address 192.168.39.66 and MAC address 52:54:00:b9:f3:c8 in network mk-functional-632788
I0819 17:08:46.848886   27863 main.go:141] libmachine: (functional-632788) Calling .GetSSHPort
I0819 17:08:46.849052   27863 main.go:141] libmachine: (functional-632788) Calling .GetSSHKeyPath
I0819 17:08:46.849189   27863 main.go:141] libmachine: (functional-632788) Calling .GetSSHUsername
I0819 17:08:46.849337   27863 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/functional-632788/id_rsa Username:docker}
I0819 17:08:46.940249   27863 build_images.go:161] Building image from path: /tmp/build.209528811.tar
I0819 17:08:46.940324   27863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 17:08:46.951100   27863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.209528811.tar
I0819 17:08:46.958454   27863 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.209528811.tar: stat -c "%s %y" /var/lib/minikube/build/build.209528811.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.209528811.tar': No such file or directory
I0819 17:08:46.958482   27863 ssh_runner.go:362] scp /tmp/build.209528811.tar --> /var/lib/minikube/build/build.209528811.tar (3072 bytes)
I0819 17:08:46.991317   27863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.209528811
I0819 17:08:47.001943   27863 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.209528811 -xf /var/lib/minikube/build/build.209528811.tar
I0819 17:08:47.010893   27863 crio.go:315] Building image: /var/lib/minikube/build/build.209528811
I0819 17:08:47.010950   27863 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-632788 /var/lib/minikube/build/build.209528811 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 17:08:49.675227   27863 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-632788 /var/lib/minikube/build/build.209528811 --cgroup-manager=cgroupfs: (2.664250991s)
I0819 17:08:49.675292   27863 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.209528811
I0819 17:08:49.686132   27863 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.209528811.tar
I0819 17:08:49.698973   27863 build_images.go:217] Built localhost/my-image:functional-632788 from /tmp/build.209528811.tar
I0819 17:08:49.699010   27863 build_images.go:133] succeeded building to: functional-632788
I0819 17:08:49.699017   27863 build_images.go:134] failed building to: 
I0819 17:08:49.699041   27863 main.go:141] libmachine: Making call to close driver server
I0819 17:08:49.699051   27863 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:49.699344   27863 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:49.699367   27863 main.go:141] libmachine: Making call to close connection to plugin binary
I0819 17:08:49.699379   27863 main.go:141] libmachine: Making call to close driver server
I0819 17:08:49.699389   27863 main.go:141] libmachine: (functional-632788) Calling .Close
I0819 17:08:49.699609   27863 main.go:141] libmachine: Successfully made call to close driver server
I0819 17:08:49.699624   27863 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)
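
The three STEP lines show the testdata/build context: a gcr.io/k8s-minikube/busybox base, a no-op RUN, and an ADD of content.txt, built through podman on the crio runtime. A minimal Go sketch, assuming the binary path and profile recorded in this run, that drives the same build and looks for the success marker printed above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs (functional_test.go:315); all arguments are taken from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-632788",
		"image", "build", "-t", "localhost/my-image:functional-632788",
		"testdata/build", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "build failed: %v\n%s", err, out)
		os.Exit(1)
	}
	// The podman-backed build prints this line on success, as seen in the stdout above.
	if !strings.Contains(string(out), "Successfully tagged localhost/my-image:functional-632788") {
		fmt.Fprintln(os.Stderr, "success marker not found in build output")
		os.Exit(1)
	}
	fmt.Println("image built and tagged")
}
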
TestFunctional/parallel/ImageCommands/Setup (1.79s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.77215937s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-632788
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image load --daemon kicbase/echo-server:functional-632788 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 image load --daemon kicbase/echo-server:functional-632788 --alsologtostderr: (1.11910741s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image load --daemon kicbase/echo-server:functional-632788 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-632788
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image load --daemon kicbase/echo-server:functional-632788 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.04s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image save kicbase/echo-server:functional-632788 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 image save kicbase/echo-server:functional-632788 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.375073671s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.38s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image rm kicbase/echo-server:functional-632788 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-632788 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.221290213s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.47s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-632788
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-632788 image save --daemon kicbase/echo-server:functional-632788 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-632788
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
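
ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a save/remove/reload round trip through a tarball. A minimal Go sketch of that sequence, reusing the binary path, profile and subcommands recorded above; the tarball path here is hypothetical (the test writes into its Jenkins workspace):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run invokes the minikube binary used in this report against the functional-632788
// profile; both values are taken from the log above.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-632788"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	tarball := "/tmp/echo-server-save.tar" // hypothetical path; the test uses its workspace directory
	steps := [][]string{
		{"image", "save", "kicbase/echo-server:functional-632788", tarball},
		{"image", "rm", "kicbase/echo-server:functional-632788"},
		{"image", "load", tarball},
		{"image", "ls"},
	}
	for _, args := range steps {
		out, err := run(args...)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%v failed: %v\n%s", args, err, out)
			os.Exit(1)
		}
		fmt.Print(out)
	}
}
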
TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-632788
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-632788
--- PASS: TestFunctional/delete_my-image_image (0.01s)
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-632788
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
TestMultiControlPlane/serial/StartCluster (196.41s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-227346 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 17:10:21.263512   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:10:48.966192   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-227346 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.785956563s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.41s)
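
The start invocation above brings up a multi-node HA control plane in a single command. A minimal Go sketch, reusing the exact flags recorded at ha_test.go:101 and :107 above (the kvm2 driver and crio runtime are specific to this CI job), that runs the same start and follow-up status check:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Flags copied from the ha_test.go:101 invocation in the log above.
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", "ha-227346",
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	t0 := time.Now()
	if err := start.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
	fmt.Printf("HA cluster up after %s\n", time.Since(t0).Round(time.Second))

	// ha_test.go:107 follows the start with a status check.
	status := exec.Command("out/minikube-linux-amd64", "-p", "ha-227346",
		"status", "-v=7", "--alsologtostderr")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	if err := status.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "status reported a problem:", err)
		os.Exit(1)
	}
}
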
TestMultiControlPlane/serial/DeployApp (7.02s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-227346 -- rollout status deployment/busybox: (5.038006728s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-cvdvs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-dncbb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-k75xm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-cvdvs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-dncbb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-k75xm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-cvdvs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-dncbb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-k75xm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.02s)
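
The deploy test resolves three names from inside every busybox pod to verify in-cluster DNS. A minimal Go sketch of that check, reusing the `minikube kubectl -p ha-227346 --` wrapper and the commands recorded above (a sketch, not the test's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// kubectl goes through the same wrapper the test uses: `minikube kubectl -p ha-227346 --`.
func kubectl(args ...string) (string, error) {
	full := append([]string{"kubectl", "-p", "ha-227346", "--"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Pod names, space separated, exactly as ha_test.go:163 queries them above.
	names, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The three names the test resolves from inside each busybox pod.
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(names) {
		for _, target := range targets {
			if out, err := kubectl("exec", pod, "--", "nslookup", target); err != nil {
				fmt.Fprintf(os.Stderr, "%s: lookup of %s failed: %v\n%s", pod, target, err, out)
				os.Exit(1)
			}
		}
	}
	fmt.Println("in-cluster DNS checks passed")
}
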
TestMultiControlPlane/serial/PingHostFromPods (1.12s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-cvdvs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-cvdvs -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-dncbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-dncbb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-k75xm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-227346 -- exec busybox-7dff88458-k75xm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)
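
The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` extracts the host IP by taking the fifth line of the nslookup output and its third space-delimited field. A small Go equivalent of just that extraction step (the sample input is illustrative, not copied from this run):

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: take the fifth line of the
// nslookup output and return its third single-space-delimited field.
func hostIPFromNslookup(output string) string {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // cut -d' ' counts empty fields, so split on single spaces
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Example input shaped like busybox nslookup output; the addresses are illustrative
	// (192.168.39.1 is the host IP that the test pings above).
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1\n"
	fmt.Println(hostIPFromNslookup(sample))
}
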
TestMultiControlPlane/serial/AddWorkerNode (55.9s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-227346 -v=7 --alsologtostderr
E0819 17:13:15.961161   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:15.967555   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:15.978909   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:16.000286   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:16.041695   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:16.123156   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:16.284685   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:16.606504   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:17.248565   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:18.530198   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:13:21.091469   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-227346 -v=7 --alsologtostderr: (55.099894727s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.90s)
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-227346 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)
TestMultiControlPlane/serial/CopyFile (12.17s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp testdata/cp-test.txt ha-227346:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346.txt
E0819 17:13:26.212839   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346:/home/docker/cp-test.txt ha-227346-m02:/home/docker/cp-test_ha-227346_ha-227346-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test_ha-227346_ha-227346-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346:/home/docker/cp-test.txt ha-227346-m03:/home/docker/cp-test_ha-227346_ha-227346-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test_ha-227346_ha-227346-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346:/home/docker/cp-test.txt ha-227346-m04:/home/docker/cp-test_ha-227346_ha-227346-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test_ha-227346_ha-227346-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp testdata/cp-test.txt ha-227346-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m02:/home/docker/cp-test.txt ha-227346:/home/docker/cp-test_ha-227346-m02_ha-227346.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test_ha-227346-m02_ha-227346.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m02:/home/docker/cp-test.txt ha-227346-m03:/home/docker/cp-test_ha-227346-m02_ha-227346-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test_ha-227346-m02_ha-227346-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m02:/home/docker/cp-test.txt ha-227346-m04:/home/docker/cp-test_ha-227346-m02_ha-227346-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test_ha-227346-m02_ha-227346-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp testdata/cp-test.txt ha-227346-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt ha-227346:/home/docker/cp-test_ha-227346-m03_ha-227346.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test_ha-227346-m03_ha-227346.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt ha-227346-m02:/home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test_ha-227346-m03_ha-227346-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m03:/home/docker/cp-test.txt ha-227346-m04:/home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test_ha-227346-m03_ha-227346-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp testdata/cp-test.txt ha-227346-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile955442382/001/cp-test_ha-227346-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt ha-227346:/home/docker/cp-test_ha-227346-m04_ha-227346.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346 "sudo cat /home/docker/cp-test_ha-227346-m04_ha-227346.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt ha-227346-m02:/home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m02 "sudo cat /home/docker/cp-test_ha-227346-m04_ha-227346-m02.txt"
E0819 17:13:36.455228   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 cp ha-227346-m04:/home/docker/cp-test.txt ha-227346-m03:/home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 ssh -n ha-227346-m03 "sudo cat /home/docker/cp-test_ha-227346-m04_ha-227346-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.17s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0819 17:15:59.821035   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.469676303s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-227346 node delete m03 -v=7 --alsologtostderr: (16.588603638s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.32s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (223.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-227346 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 17:25:21.263135   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:28:15.961591   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-227346 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m42.774547905s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (223.52s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-227346 --control-plane -v=7 --alsologtostderr
E0819 17:29:39.025105   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-227346 --control-plane -v=7 --alsologtostderr: (1m16.063699034s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-227346 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.88s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (51.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-100541 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0819 17:30:21.263157   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-100541 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (51.702186838s)
--- PASS: TestJSONOutput/start/Command (51.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-100541 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-100541 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.65s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-100541 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-100541 --output=json --user=testUser: (6.648677088s)
--- PASS: TestJSONOutput/stop/Command (6.65s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-265286 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-265286 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.712154ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6a512fbc-c4e4-4875-892b-f34f86f953e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-265286] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b5c82f62-0268-4266-be2e-8583daf0f0dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19478"}}
	{"specversion":"1.0","id":"b4f1d4fb-9c8c-44bd-9b16-3c56ff93859d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"518d4073-e0b0-4aa2-bfff-4983f49abfb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig"}}
	{"specversion":"1.0","id":"b4672fa6-7a60-4458-87e7-bd92f994f6e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube"}}
	{"specversion":"1.0","id":"75ca6661-112d-4346-80cc-f2f9ad6b7b03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"466894c0-5bff-41a9-b2f8-d32552a905b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3417fe7e-cb4b-4f49-b7ea-706d0ce6d638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-265286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-265286
--- PASS: TestErrorJSONOutput (0.19s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (84.18s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-152034 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-152034 --driver=kvm2  --container-runtime=crio: (43.024587662s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-154536 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-154536 --driver=kvm2  --container-runtime=crio: (38.590025933s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-152034
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-154536
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-154536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-154536
helpers_test.go:175: Cleaning up "first-152034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-152034
--- PASS: TestMinikubeProfile (84.18s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (25.79s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-859225 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-859225 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.789716144s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.79s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-859225 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-859225 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-874173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-874173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.040374232s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.04s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874173 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874173 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-859225 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874173 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874173 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-874173
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-874173: (1.271545817s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.11s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-874173
E0819 17:33:15.960956   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-874173: (23.107899512s)
--- PASS: TestMountStart/serial/RestartStopped (24.11s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874173 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874173 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-188752 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 17:35:21.263081   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-188752 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.294634729s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.69s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-188752 -- rollout status deployment/busybox: (4.112214054s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-2f5fw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-vxmhm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-2f5fw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-vxmhm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-2f5fw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-vxmhm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.54s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-2f5fw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-2f5fw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-vxmhm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-188752 -- exec busybox-7dff88458-vxmhm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
TestMultiNode/serial/AddNode (52.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-188752 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-188752 -v 3 --alsologtostderr: (52.407807903s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-188752 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp testdata/cp-test.txt multinode-188752:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2485370709/001/cp-test_multinode-188752.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752:/home/docker/cp-test.txt multinode-188752-m02:/home/docker/cp-test_multinode-188752_multinode-188752-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m02 "sudo cat /home/docker/cp-test_multinode-188752_multinode-188752-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752:/home/docker/cp-test.txt multinode-188752-m03:/home/docker/cp-test_multinode-188752_multinode-188752-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m03 "sudo cat /home/docker/cp-test_multinode-188752_multinode-188752-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp testdata/cp-test.txt multinode-188752-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2485370709/001/cp-test_multinode-188752-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt multinode-188752:/home/docker/cp-test_multinode-188752-m02_multinode-188752.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752 "sudo cat /home/docker/cp-test_multinode-188752-m02_multinode-188752.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752-m02:/home/docker/cp-test.txt multinode-188752-m03:/home/docker/cp-test_multinode-188752-m02_multinode-188752-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m03 "sudo cat /home/docker/cp-test_multinode-188752-m02_multinode-188752-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp testdata/cp-test.txt multinode-188752-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2485370709/001/cp-test_multinode-188752-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt multinode-188752:/home/docker/cp-test_multinode-188752-m03_multinode-188752.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752 "sudo cat /home/docker/cp-test_multinode-188752-m03_multinode-188752.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 cp multinode-188752-m03:/home/docker/cp-test.txt multinode-188752-m02:/home/docker/cp-test_multinode-188752-m03_multinode-188752-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 ssh -n multinode-188752-m02 "sudo cat /home/docker/cp-test_multinode-188752-m03_multinode-188752-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)

                                                
                                    
TestMultiNode/serial/StopNode (2.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-188752 node stop m03: (1.290767093s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-188752 status: exit status 7 (408.814639ms)

                                                
                                                
-- stdout --
	multinode-188752
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-188752-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-188752-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-188752 status --alsologtostderr: exit status 7 (407.406396ms)

                                                
                                                
-- stdout --
	multinode-188752
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-188752-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-188752-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:36:40.703858   44897 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:36:40.703957   44897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:36:40.703970   44897 out.go:358] Setting ErrFile to fd 2...
	I0819 17:36:40.703975   44897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:36:40.704144   44897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:36:40.704299   44897 out.go:352] Setting JSON to false
	I0819 17:36:40.704323   44897 mustload.go:65] Loading cluster: multinode-188752
	I0819 17:36:40.704375   44897 notify.go:220] Checking for updates...
	I0819 17:36:40.704692   44897 config.go:182] Loaded profile config "multinode-188752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:36:40.704705   44897 status.go:255] checking status of multinode-188752 ...
	I0819 17:36:40.705102   44897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:36:40.705199   44897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:36:40.724423   44897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0819 17:36:40.724808   44897 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:36:40.725346   44897 main.go:141] libmachine: Using API Version  1
	I0819 17:36:40.725369   44897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:36:40.725729   44897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:36:40.725886   44897 main.go:141] libmachine: (multinode-188752) Calling .GetState
	I0819 17:36:40.727376   44897 status.go:330] multinode-188752 host status = "Running" (err=<nil>)
	I0819 17:36:40.727395   44897 host.go:66] Checking if "multinode-188752" exists ...
	I0819 17:36:40.727664   44897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:36:40.727698   44897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:36:40.742424   44897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42937
	I0819 17:36:40.742798   44897 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:36:40.743238   44897 main.go:141] libmachine: Using API Version  1
	I0819 17:36:40.743255   44897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:36:40.743565   44897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:36:40.743750   44897 main.go:141] libmachine: (multinode-188752) Calling .GetIP
	I0819 17:36:40.746596   44897 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:36:40.747003   44897 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:36:40.747032   44897 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:36:40.747181   44897 host.go:66] Checking if "multinode-188752" exists ...
	I0819 17:36:40.747617   44897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:36:40.747677   44897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:36:40.762694   44897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0819 17:36:40.763045   44897 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:36:40.763496   44897 main.go:141] libmachine: Using API Version  1
	I0819 17:36:40.763509   44897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:36:40.763790   44897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:36:40.763984   44897 main.go:141] libmachine: (multinode-188752) Calling .DriverName
	I0819 17:36:40.764197   44897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:36:40.764228   44897 main.go:141] libmachine: (multinode-188752) Calling .GetSSHHostname
	I0819 17:36:40.766707   44897 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:36:40.767050   44897 main.go:141] libmachine: (multinode-188752) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:26:cf", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:33:49 +0000 UTC Type:0 Mac:52:54:00:98:26:cf Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-188752 Clientid:01:52:54:00:98:26:cf}
	I0819 17:36:40.767081   44897 main.go:141] libmachine: (multinode-188752) DBG | domain multinode-188752 has defined IP address 192.168.39.69 and MAC address 52:54:00:98:26:cf in network mk-multinode-188752
	I0819 17:36:40.767202   44897 main.go:141] libmachine: (multinode-188752) Calling .GetSSHPort
	I0819 17:36:40.767354   44897 main.go:141] libmachine: (multinode-188752) Calling .GetSSHKeyPath
	I0819 17:36:40.767623   44897 main.go:141] libmachine: (multinode-188752) Calling .GetSSHUsername
	I0819 17:36:40.767754   44897 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752/id_rsa Username:docker}
	I0819 17:36:40.847812   44897 ssh_runner.go:195] Run: systemctl --version
	I0819 17:36:40.853638   44897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:36:40.866618   44897 kubeconfig.go:125] found "multinode-188752" server: "https://192.168.39.69:8443"
	I0819 17:36:40.866650   44897 api_server.go:166] Checking apiserver status ...
	I0819 17:36:40.866680   44897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:36:40.879637   44897 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1038/cgroup
	W0819 17:36:40.889367   44897 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1038/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0819 17:36:40.889438   44897 ssh_runner.go:195] Run: ls
	I0819 17:36:40.893605   44897 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0819 17:36:40.898215   44897 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0819 17:36:40.898235   44897 status.go:422] multinode-188752 apiserver status = Running (err=<nil>)
	I0819 17:36:40.898245   44897 status.go:257] multinode-188752 status: &{Name:multinode-188752 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:36:40.898284   44897 status.go:255] checking status of multinode-188752-m02 ...
	I0819 17:36:40.898572   44897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:36:40.898606   44897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:36:40.913512   44897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36247
	I0819 17:36:40.913873   44897 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:36:40.914352   44897 main.go:141] libmachine: Using API Version  1
	I0819 17:36:40.914374   44897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:36:40.914674   44897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:36:40.914840   44897 main.go:141] libmachine: (multinode-188752-m02) Calling .GetState
	I0819 17:36:40.916351   44897 status.go:330] multinode-188752-m02 host status = "Running" (err=<nil>)
	I0819 17:36:40.916366   44897 host.go:66] Checking if "multinode-188752-m02" exists ...
	I0819 17:36:40.916725   44897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:36:40.916780   44897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:36:40.931945   44897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0819 17:36:40.932315   44897 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:36:40.932827   44897 main.go:141] libmachine: Using API Version  1
	I0819 17:36:40.932856   44897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:36:40.933149   44897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:36:40.933321   44897 main.go:141] libmachine: (multinode-188752-m02) Calling .GetIP
	I0819 17:36:40.936059   44897 main.go:141] libmachine: (multinode-188752-m02) DBG | domain multinode-188752-m02 has defined MAC address 52:54:00:56:43:70 in network mk-multinode-188752
	I0819 17:36:40.936477   44897 main.go:141] libmachine: (multinode-188752-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:43:70", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:34:53 +0000 UTC Type:0 Mac:52:54:00:56:43:70 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-188752-m02 Clientid:01:52:54:00:56:43:70}
	I0819 17:36:40.936499   44897 main.go:141] libmachine: (multinode-188752-m02) DBG | domain multinode-188752-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:56:43:70 in network mk-multinode-188752
	I0819 17:36:40.936668   44897 host.go:66] Checking if "multinode-188752-m02" exists ...
	I0819 17:36:40.937004   44897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:36:40.937036   44897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:36:40.951250   44897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I0819 17:36:40.951764   44897 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:36:40.952206   44897 main.go:141] libmachine: Using API Version  1
	I0819 17:36:40.952227   44897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:36:40.952534   44897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:36:40.952693   44897 main.go:141] libmachine: (multinode-188752-m02) Calling .DriverName
	I0819 17:36:40.952853   44897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:36:40.952873   44897 main.go:141] libmachine: (multinode-188752-m02) Calling .GetSSHHostname
	I0819 17:36:40.955209   44897 main.go:141] libmachine: (multinode-188752-m02) DBG | domain multinode-188752-m02 has defined MAC address 52:54:00:56:43:70 in network mk-multinode-188752
	I0819 17:36:40.955595   44897 main.go:141] libmachine: (multinode-188752-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:43:70", ip: ""} in network mk-multinode-188752: {Iface:virbr1 ExpiryTime:2024-08-19 18:34:53 +0000 UTC Type:0 Mac:52:54:00:56:43:70 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-188752-m02 Clientid:01:52:54:00:56:43:70}
	I0819 17:36:40.955627   44897 main.go:141] libmachine: (multinode-188752-m02) DBG | domain multinode-188752-m02 has defined IP address 192.168.39.123 and MAC address 52:54:00:56:43:70 in network mk-multinode-188752
	I0819 17:36:40.955760   44897 main.go:141] libmachine: (multinode-188752-m02) Calling .GetSSHPort
	I0819 17:36:40.955904   44897 main.go:141] libmachine: (multinode-188752-m02) Calling .GetSSHKeyPath
	I0819 17:36:40.956050   44897 main.go:141] libmachine: (multinode-188752-m02) Calling .GetSSHUsername
	I0819 17:36:40.956160   44897 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19478-10654/.minikube/machines/multinode-188752-m02/id_rsa Username:docker}
	I0819 17:36:41.039109   44897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:36:41.051882   44897 status.go:257] multinode-188752-m02 status: &{Name:multinode-188752-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 17:36:41.051909   44897 status.go:255] checking status of multinode-188752-m03 ...
	I0819 17:36:41.052276   44897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0819 17:36:41.052318   44897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0819 17:36:41.067501   44897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0819 17:36:41.067929   44897 main.go:141] libmachine: () Calling .GetVersion
	I0819 17:36:41.068392   44897 main.go:141] libmachine: Using API Version  1
	I0819 17:36:41.068412   44897 main.go:141] libmachine: () Calling .SetConfigRaw
	I0819 17:36:41.068786   44897 main.go:141] libmachine: () Calling .GetMachineName
	I0819 17:36:41.068988   44897 main.go:141] libmachine: (multinode-188752-m03) Calling .GetState
	I0819 17:36:41.070790   44897 status.go:330] multinode-188752-m03 host status = "Stopped" (err=<nil>)
	I0819 17:36:41.070807   44897 status.go:343] host is not running, skipping remaining checks
	I0819 17:36:41.070813   44897 status.go:257] multinode-188752-m03 status: &{Name:multinode-188752-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-188752 node start m03 -v=7 --alsologtostderr: (38.434196913s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.04s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-188752 node delete m03: (1.670597178s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.18s)
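
The node-readiness assertion above comes down to the kubectl go-template query in the last Run line: after the delete, every remaining node must report a Ready condition with status True. Below is a minimal standalone sketch of that same check in Go (the language the test suite itself is written in), assuming only that kubectl is on PATH and the kubeconfig already points at the multinode-188752 cluster; it is an illustration, not part of the suite.

// Illustrative sketch: re-run the node-readiness query shown in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the test invocation, minus the extra shell quoting.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			log.Fatalf("a node reports Ready=%s; expected True for every remaining node", status)
		}
	}
	fmt.Println("all remaining nodes report Ready=True")
}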

                                                
                                    
TestMultiNode/serial/RestartMultiNode (205.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-188752 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0819 17:45:21.263758   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:46:19.026778   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 17:48:15.961812   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-188752 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.143145815s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-188752 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (205.66s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-188752
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-188752-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-188752-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.276571ms)

                                                
                                                
-- stdout --
	* [multinode-188752-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-188752-m02' is duplicated with machine name 'multinode-188752-m02' in profile 'multinode-188752'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-188752-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-188752-m03 --driver=kvm2  --container-runtime=crio: (42.5329199s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-188752
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-188752: exit status 80 (205.002261ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-188752 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-188752-m03 already exists in multinode-188752-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-188752-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.60s)
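
Both failure paths above are the intended outcomes: reusing an existing machine name as a profile name is rejected with MK_USAGE (exit status 14), and adding a node whose generated name collides with an existing profile fails with GUEST_NODE_ADD (exit status 80). A hedged sketch of asserting the first case from plain Go, using the exact command and profile name from the log; the errors.As handling is an assumption about how one might check the exit code, not test-suite code.

// Illustrative only: expect exit status 14 when a profile name collides with
// an existing machine name, as in the ValidateNameConflict run above.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "multinode-188752-m02",
		"--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		log.Println("got expected MK_USAGE exit status 14 (duplicate profile name)")
		return
	}
	log.Fatalf("expected exit status 14, got: %v", err)
}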

                                                
                                    
TestScheduledStopUnix (113.83s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-770588 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-770588 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.303479729s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-770588 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-770588 -n scheduled-stop-770588
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-770588 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-770588 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-770588 -n scheduled-stop-770588
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-770588
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-770588 --schedule 15s
E0819 17:55:04.335773   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0819 17:55:21.263556   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-770588
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-770588: exit status 7 (62.416113ms)

                                                
                                                
-- stdout --
	scheduled-stop-770588
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-770588 -n scheduled-stop-770588
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-770588 -n scheduled-stop-770588: exit status 7 (63.787813ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-770588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-770588
--- PASS: TestScheduledStopUnix (113.83s)
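
The sequence above schedules a 5m stop, cancels it, reschedules with a 15s delay, and then checks minikube status until it returns exit status 7 with everything reported Stopped. A compact sketch of that flow, assuming the same binary path and profile name as in the log; the 20-iteration, 5-second polling loop is an illustrative choice, not what the test does.

// Illustrative sketch of the schedule -> cancel -> reschedule -> poll sequence
// from TestScheduledStopUnix above. Exit status 7 from "minikube status" means
// the host is stopped.
package main

import (
	"log"
	"os/exec"
	"time"
)

func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	profile := "scheduled-stop-770588"
	// Errors from the scheduling commands are ignored here for brevity.
	_ = run("stop", "-p", profile, "--schedule", "5m")
	_ = run("stop", "-p", profile, "--cancel-scheduled")
	_ = run("stop", "-p", profile, "--schedule", "15s")

	for i := 0; i < 20; i++ {
		err := exec.Command("out/minikube-linux-amd64", "status", "-p", profile).Run()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			log.Println("profile reports Stopped (exit status 7)")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("profile never reached the Stopped state")
}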

                                                
                                    
TestRunningBinaryUpgrade (199.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1534762615 start -p running-upgrade-608764 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1534762615 start -p running-upgrade-608764 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.751881144s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-608764 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-608764 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.858555167s)
helpers_test.go:175: Cleaning up "running-upgrade-608764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-608764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-608764: (1.209530097s)
--- PASS: TestRunningBinaryUpgrade (199.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411119 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-411119 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.030218ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-411119] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (93.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411119 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411119 --driver=kvm2  --container-runtime=crio: (1m32.762462364s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-411119 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.01s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (129.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3290660455 start -p stopped-upgrade-679112 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3290660455 start -p stopped-upgrade-679112 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m27.964129556s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3290660455 -p stopped-upgrade-679112 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3290660455 -p stopped-upgrade-679112 stop: (2.122932356s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-679112 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-679112 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.611264356s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (129.70s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (47.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411119 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411119 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.555080436s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-411119 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-411119 status -o json: exit status 2 (234.058675ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-411119","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-411119
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.69s)

                                                
                                    
TestNoKubernetes/serial/Start (29.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411119 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0819 17:58:15.960978   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411119 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.082035272s)
--- PASS: TestNoKubernetes/serial/Start (29.08s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-411119 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-411119 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.873711ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
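
The exit status 1 here is the success case: minikube ssh propagates the failure of systemctl is-active (remote status 3, i.e. the kubelet unit is inactive), which is exactly what a --no-kubernetes profile should look like. A small sketch of the same probe, assuming the NoKubernetes-411119 profile is still up; the exec.ExitError handling is illustrative only.

// Illustrative only: verify that the kubelet unit is NOT active inside the
// minikube guest. A non-zero exit from "systemctl is-active" means inactive.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-411119",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		log.Fatal("kubelet is active, but it should not be running in --no-kubernetes mode")
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		log.Printf("kubelet inactive as expected (minikube ssh exit status %d)", exitErr.ExitCode())
		return
	}
	log.Fatalf("unexpected error running minikube ssh: %v", err)
}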

                                                
                                    
TestNoKubernetes/serial/ProfileList (26.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.380527602s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (11.49774007s)
--- PASS: TestNoKubernetes/serial/ProfileList (26.88s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-411119
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-411119: (1.291626804s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-411119 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-411119 --driver=kvm2  --container-runtime=crio: (21.138874325s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.14s)

                                                
                                    
TestPause/serial/Start (96.21s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-164373 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-164373 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m36.211970673s)
--- PASS: TestPause/serial/Start (96.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-679112
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-411119 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-411119 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.175336ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestNetworkPlugins/group/false (2.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-321572 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-321572 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (95.347228ms)

                                                
                                                
-- stdout --
	* [false-321572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 17:59:30.548473   55814 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:59:30.548577   55814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:59:30.548586   55814 out.go:358] Setting ErrFile to fd 2...
	I0819 17:59:30.548590   55814 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:59:30.548806   55814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-10654/.minikube/bin
	I0819 17:59:30.549350   55814 out.go:352] Setting JSON to false
	I0819 17:59:30.550335   55814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6116,"bootTime":1724084255,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:59:30.550393   55814 start.go:139] virtualization: kvm guest
	I0819 17:59:30.552317   55814 out.go:177] * [false-321572] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:59:30.553471   55814 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:59:30.553455   55814 notify.go:220] Checking for updates...
	I0819 17:59:30.555802   55814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:59:30.557051   55814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-10654/kubeconfig
	I0819 17:59:30.558347   55814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-10654/.minikube
	I0819 17:59:30.559543   55814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:59:30.560596   55814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:59:30.562003   55814 config.go:182] Loaded profile config "force-systemd-env-380066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:59:30.562085   55814 config.go:182] Loaded profile config "kubernetes-upgrade-415209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0819 17:59:30.562164   55814 config.go:182] Loaded profile config "pause-164373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:59:30.562239   55814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:59:30.597237   55814 out.go:177] * Using the kvm2 driver based on user configuration
	I0819 17:59:30.598496   55814 start.go:297] selected driver: kvm2
	I0819 17:59:30.598520   55814 start.go:901] validating driver "kvm2" against <nil>
	I0819 17:59:30.598534   55814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:59:30.600519   55814 out.go:201] 
	W0819 17:59:30.601799   55814 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 17:59:30.602897   55814 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-321572 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-321572" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-321572

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-321572"

                                                
                                                
----------------------- debugLogs end: false-321572 [took: 2.562913245s] --------------------------------
helpers_test.go:175: Cleaning up "false-321572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-321572
--- PASS: TestNetworkPlugins/group/false (2.80s)
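
The "false" CNI variant is expected to fail fast: with the crio container runtime, minikube rejects --cni=false and exits with status 14 (MK_USAGE: The "crio" container runtime requires CNI), so the long debugLogs block above runs against a profile that was never created. A small sketch asserting that behaviour, with the command and profile name taken verbatim from the log; the exit-code check is illustrative, not test-suite code.

// Illustrative only: --cni=false combined with --container-runtime=crio should
// be rejected up front with MK_USAGE (exit status 14), as in the log above.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "false-321572",
		"--memory=2048", "--cni=false", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		log.Println("start rejected as expected: the crio runtime requires CNI")
		return
	}
	log.Fatalf("expected exit status 14, got: %v", err)
}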

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (75.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-164373 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-164373 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.900045781s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (75.92s)

                                                
                                    
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-164373 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-164373 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-164373 --output=json --layout=cluster: exit status 2 (236.972958ms)

                                                
                                                
-- stdout --
	{"Name":"pause-164373","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-164373","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
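
The exit status 2 above is expected for a paused cluster: with --output=json --layout=cluster, minikube status reports the paused state both in the process exit code and in the JSON payload (StatusCode 418, StatusName "Paused"). A sketch of decoding just the fields visible in that payload; the struct below is a partial, illustrative model of the output, not the full schema.

// Illustrative only: decode the cluster-layout status JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	// The non-zero exit status (2 for a paused cluster) is expected and
	// deliberately ignored; Output still returns the captured stdout.
	out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", "pause-164373",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decoding status JSON: %v", err)
	}
	fmt.Printf("%s: %s (%d), %d node(s)\n", st.Name, st.StatusName, st.StatusCode, len(st.Nodes))
}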

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-164373 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-164373 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
TestPause/serial/DeletePaused (1.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-164373 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-164373 --alsologtostderr -v=5: (1.028808237s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-233969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-233969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m30.429008109s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-813424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:02:59.028897   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:15.960934   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-813424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m49.500659691s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (109.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-233969 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e5a2d1eb-4377-45f8-9a23-88e54e4afb08] Pending
helpers_test.go:344: "busybox" [e5a2d1eb-4377-45f8-9a23-88e54e4afb08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e5a2d1eb-4377-45f8-9a23-88e54e4afb08] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003553148s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-233969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)
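
Note: a sketch of the same deploy-and-verify flow using plain kubectl from Go. kubectl wait stands in for the label-polling helper (helpers_test.go:344) the test actually uses; the manifest path, label, timeout and context name are taken from the log above.

package main

import (
    "log"
    "os/exec"
)

// run executes a command and aborts on error, echoing the combined output.
func run(name string, args ...string) string {
    out, err := exec.Command(name, args...).CombinedOutput()
    if err != nil {
        log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
    }
    return string(out)
}

func main() {
    ctx := "no-preload-233969" // kubectl context from the log above

    // Deploy the same busybox manifest the test uses.
    run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")

    // Block until the pod matching integration-test=busybox reports Ready,
    // mirroring the 8m0s wait recorded above.
    run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
        "pod", "-l", "integration-test=busybox", "--timeout=8m")

    // Same sanity check as the log: read the file-descriptor limit inside the container.
    log.Print(run("kubectl", "--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
}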

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-233969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-233969 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eeb80fff-1a91-4f45-8a17-c66d1da6882f] Pending
helpers_test.go:344: "busybox" [eeb80fff-1a91-4f45-8a17-c66d1da6882f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eeb80fff-1a91-4f45-8a17-c66d1da6882f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003334431s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813424 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-813424 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (42.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-233045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-233045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (42.933319625s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (683.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-233969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-233969 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m22.799372525s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233969 -n no-preload-233969
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (683.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-233045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-233045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04395216s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-233045 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-233045 --alsologtostderr -v=3: (10.494536701s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233045 -n newest-cni-233045
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233045 -n newest-cni-233045: exit status 7 (62.154147ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-233045 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
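
Note: after the stop, the status probe above exits with code 7 and prints Stopped, and the test records it as "may be ok" rather than a failure. A small sketch of reading that exit code from Go, assuming only what the log shows about code 7:

package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-amd64", "status",
        "--format={{.Host}}", "-p", "newest-cni-233045", "-n", "newest-cni-233045")
    out, err := cmd.Output()
    fmt.Printf("host status: %s\n", out)

    var exitErr *exec.ExitError
    if errors.As(err, &exitErr) {
        // Per the log above, exit status 7 with "Stopped" on stdout is what a
        // freshly stopped profile reports, so it is not treated as a hard failure.
        fmt.Println("status exited with code", exitErr.ExitCode(), "(may be ok)")
    } else if err != nil {
        fmt.Println("status failed to run:", err)
    }
}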

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (286.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-233045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-233045 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (4m46.732702194s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233045 -n newest-cni-233045
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (286.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (558.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-813424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:08:15.961827   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-813424 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m18.118960735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813424 -n default-k8s-diff-port-813424
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (558.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-079123 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-079123 --alsologtostderr -v=3: (2.274137622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-079123 -n old-k8s-version-079123: exit status 7 (63.741704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-079123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-233045 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
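
Note: a sketch of consuming the same image list output from Go. The JSON is decoded generically because the exact schema is not shown in the log; the real test compares the list against the expected Kubernetes images and flags extras such as the kindest/kindnetd image noted above.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // Same invocation as start_stop_delete_test.go:304 above.
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-233045",
        "image", "list", "--format=json").Output()
    if err != nil {
        log.Fatalf("image list failed: %v", err)
    }
    // Decode into a generic value rather than assuming a schema, then print it.
    var images interface{}
    if err := json.Unmarshal(out, &images); err != nil {
        log.Fatalf("unexpected output: %v\n%s", err, out)
    }
    fmt.Printf("%v\n", images)
}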

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-233045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-233045 -n newest-cni-233045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-233045 -n newest-cni-233045: exit status 2 (227.456726ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-233045 -n newest-cni-233045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-233045 -n newest-cni-233045: exit status 2 (227.167537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-233045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-233045 -n newest-cni-233045
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-233045 -n newest-cni-233045
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.26s)
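
Note: a sketch of the pause round-trip exercised above: pause the profile, read the APIServer and Kubelet status fields (which exit non-zero while paused, hence the "may be ok" notes), then unpause and read them again. Binary path and flags are as logged.

package main

import (
    "fmt"
    "os/exec"
)

// status reads one minikube status field and returns stdout; non-zero exits are
// expected while the cluster is paused, so the error is reported but not fatal.
func status(profile, field string) string {
    out, err := exec.Command("out/minikube-linux-amd64", "status",
        "--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
    if err != nil {
        fmt.Printf("status %s exited non-zero (may be ok): %v\n", field, err)
    }
    return string(out)
}

func main() {
    const profile = "newest-cni-233045"

    // Pause, confirm the apiserver reports Paused and the kubelet Stopped, then unpause.
    exec.Command("out/minikube-linux-amd64", "pause", "-p", profile, "--alsologtostderr", "-v=1").Run()
    fmt.Println("APIServer:", status(profile, "APIServer")) // "Paused" in the log above
    fmt.Println("Kubelet:", status(profile, "Kubelet"))     // "Stopped" in the log above

    exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run()
    fmt.Println("APIServer:", status(profile, "APIServer"))
    fmt.Println("Kubelet:", status(profile, "Kubelet"))
}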

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (93.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-306581 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:11:44.337246   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-306581 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m33.96047221s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.96s)
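
Note: --embed-certs asks minikube to inline the certificate material in the kubeconfig entry instead of referencing files on disk. A hedged spot-check from outside the test (not something this test does) is to render the raw, minified config for this context and look for the inline *-data fields:

package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func main() {
    // Render the kubeconfig entry for the embed-certs-306581 context with secrets inlined.
    out, err := exec.Command("kubectl", "config", "view", "--raw", "--minify",
        "--context", "embed-certs-306581").Output()
    if err != nil {
        log.Fatalf("kubectl config view failed: %v", err)
    }
    cfg := string(out)
    // With --embed-certs, the cert and key bytes live inside the kubeconfig itself.
    for _, field := range []string{"certificate-authority-data", "client-certificate-data", "client-key-data"} {
        fmt.Printf("%-28s present: %v\n", field, strings.Contains(cfg, field))
    }
}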

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-306581 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2f88773d-1f7d-469a-94e6-7c554b44a087] Pending
helpers_test.go:344: "busybox" [2f88773d-1f7d-469a-94e6-7c554b44a087] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2f88773d-1f7d-469a-94e6-7c554b44a087] Running
E0819 18:13:15.961604   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00388984s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-306581 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-306581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-306581 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (615.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-306581 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-306581 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m15.562487667s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-306581 -n embed-certs-306581
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (615.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (83.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m23.14014769s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (82.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m22.542873736s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0819 18:33:15.961351   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m35.948620959s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-321572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-321572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jqkr2" [19c78cf8-559c-40af-b1a8-9a535aa96608] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jqkr2" [19c78cf8-559c-40af-b1a8-9a535aa96608] Running
E0819 18:33:37.600262   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:37.608377   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:37.620165   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:37.641854   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:37.683350   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:37.765377   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:37.926967   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:38.249139   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:38.891241   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003595279s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-z9fh4" [e565cc80-f2b8-4e51-bd64-5b6f82bfedad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004583265s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-321572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-321572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lg5ms" [fd8b2666-9b74-4978-8a7e-f6322d76f5bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lg5ms" [fd8b2666-9b74-4978-8a7e-f6322d76f5bb] Running
E0819 18:33:47.857132   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00496436s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-321572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0819 18:33:40.172868   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
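
Note: the auto profile's DNS, Localhost and HairPin checks above boil down to three exec commands against the netcat deployment. A sketch of running the same commands from Go, with the context name and commands copied from the net_test.go lines above:

package main

import (
    "fmt"
    "log"
    "os/exec"
)

// kexec runs a command inside the netcat deployment via kubectl exec and
// aborts the program if the command fails.
func kexec(ctx string, shellCmd ...string) string {
    args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, shellCmd...)
    out, err := exec.Command("kubectl", args...).CombinedOutput()
    if err != nil {
        log.Fatalf("kubectl exec failed: %v\n%s", err, out)
    }
    return string(out)
}

func main() {
    ctx := "auto-321572" // profile/context from the log above

    // DNS: resolve the in-cluster kubernetes.default service name.
    fmt.Print(kexec(ctx, "nslookup", "kubernetes.default"))

    // Localhost and hairpin reachability, exactly as net_test.go runs them.
    kexec(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
    kexec(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
    fmt.Println("netcat connectivity checks passed")
}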

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-321572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (75.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0819 18:33:58.098538   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m15.792055722s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.79s)
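
Note: unlike the kindnet and calico runs above, --cni here points at a manifest file (testdata/kube-flannel.yaml) rather than a built-in plugin name. A minimal sketch of the same invocation from Go, flags copied verbatim from net_test.go:112 and the binary path assumed from the CI workspace layout:

package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // Start a profile whose CNI comes from a user-supplied manifest instead of a
    // built-in plugin name.
    cmd := exec.Command("out/minikube-linux-amd64", "start",
        "-p", "custom-flannel-321572",
        "--memory=3072", "--alsologtostderr", "--wait=true", "--wait-timeout=15m",
        "--cni=testdata/kube-flannel.yaml",
        "--driver=kvm2", "--container-runtime=crio")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        log.Fatalf("minikube start failed: %v", err)
    }
}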

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (75.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m15.988225247s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rkxlh" [09e9af24-69c7-4c22-a006-d40f5a85301e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005062227s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-321572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-321572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6c7xk" [3529bbee-bd70-4e71-98c5-0fd6b42a0705] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 18:34:17.272083   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:17.278523   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:17.289932   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:17.311403   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:17.352920   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:17.434426   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:17.596441   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:17.918395   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6c7xk" [3529bbee-bd70-4e71-98c5-0fd6b42a0705] Running
E0819 18:34:18.560248   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:18.580835   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:19.842111   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:22.404602   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005056488s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-321572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (87.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0819 18:34:58.250998   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:34:59.542378   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m27.603878959s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-321572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-321572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vszpl" [b0911b6f-f274-4d7c-963f-3c71bb802857] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 18:35:13.760501   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vszpl" [b0911b6f-f274-4d7c-963f-3c71bb802857] Running
E0819 18:35:18.882685   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:21.262880   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005474119s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-321572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-321572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6qjhv" [0918ec60-a50b-4f34-b3fd-4ca25bc5d183] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6qjhv" [0918ec60-a50b-4f34-b3fd-4ca25bc5d183] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004714616s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-321572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-321572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (84.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0819 18:35:39.212294   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:35:49.605536   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-321572 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m24.531099241s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6c78c" [a6bf539d-0ab1-4521-bed2-44c5507d161b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004804794s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-321572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-321572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5k68b" [c17b2839-2315-41d8-8126-f9910a3dd6d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 18:36:19.032426   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5k68b" [c17b2839-2315-41d8-8126-f9910a3dd6d4] Running
E0819 18:36:21.463999   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004810175s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-321572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-321572 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-321572 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m54tk" [7273be91-4c5d-451f-b1f4-67ba6bae518e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m54tk" [7273be91-4c5d-451f-b1f4-67ba6bae518e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00367968s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-321572 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
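
The flannel and bridge runs above all exercise the same three connectivity probes against the shared netcat deployment. As a minimal sketch (assuming the bridge-321572 context and the deployment from testdata/netcat-deployment.yaml are still available), the probes can be replayed by hand:

    # DNS: resolve the cluster's kubernetes.default service from inside the pod
    kubectl --context bridge-321572 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: confirm the pod can reach a listener on its own loopback, port 8080
    kubectl --context bridge-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: confirm the pod can reach itself through its own service name
    kubectl --context bridge-321572 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"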
E0819 18:37:52.488924   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:15.961616   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/functional-632788/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:28.898544   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:28.904905   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:28.916225   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:28.937549   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:28.978891   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:29.060355   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:29.221914   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:29.543638   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:30.184973   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:31.466716   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:32.920882   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:32.927301   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:32.938722   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:32.960118   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:33.001653   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:33.083110   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:33.244637   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:33.566328   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:34.028186   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:34.207702   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:35.489543   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:37.600041   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:38.050956   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:39.149801   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:43.172796   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:49.391981   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:38:53.415020   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:05.305384   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/no-preload-233969/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:06.725817   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:06.732209   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:06.743565   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:06.764958   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:06.806442   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:06.887906   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:07.049524   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:07.371401   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:08.013279   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:09.294927   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:09.873295   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:11.857149   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:13.896936   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:16.979344   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:17.272030   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:27.221554   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:44.975470   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/default-k8s-diff-port-813424/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:47.703815   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:50.834718   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:39:54.858247   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:08.630387   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:12.563885   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:12.570296   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:12.581723   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:12.603102   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:12.644546   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:12.726106   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:12.887862   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:13.209805   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:13.852112   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:15.134421   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:17.695987   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:21.262844   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/addons-825243/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:22.737396   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:22.743730   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:22.755015   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:22.776350   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:22.817750   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:22.817749   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:22.899203   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:23.061018   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:23.382764   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:24.024100   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:25.306000   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:27.867609   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:28.665325   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/calico-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:32.989639   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:33.059038   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:36.330749   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/old-k8s-version-079123/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:43.231199   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:53.540972   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:03.713342   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/enable-default-cni-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:08.749085   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:08.755454   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:08.766797   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:08.788160   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:08.829620   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:08.911333   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:09.072849   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:09.394574   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:10.036522   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:11.318344   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:12.756789   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/auto-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:13.880640   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:16.779788   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/kindnet-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:19.002042   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:29.243879   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/flannel-321572/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:34.502812   17837 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-10654/.minikube/profiles/custom-flannel-321572/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
266 TestStartStop/group/disable-driver-mounts 0.14
274 TestNetworkPlugins/group/kubenet 2.82
282 TestNetworkPlugins/group/cilium 3.17

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-814719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-814719
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.82s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-321572 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-321572" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-321572

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-321572"

                                                
                                                
----------------------- debugLogs end: kubenet-321572 [took: 2.673152331s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-321572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-321572
--- SKIP: TestNetworkPlugins/group/kubenet (2.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-321572 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-321572" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-321572

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-321572" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-321572"

                                                
                                                
----------------------- debugLogs end: cilium-321572 [took: 3.016882551s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-321572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-321572
--- SKIP: TestNetworkPlugins/group/cilium (3.17s)
